2026-04-05 00:00:08.257320 | Job console starting
2026-04-05 00:00:08.267498 | Updating git repos
2026-04-05 00:00:08.821394 | Cloning repos into workspace
2026-04-05 00:00:09.207761 | Restoring repo states
2026-04-05 00:00:09.243868 | Merging changes
2026-04-05 00:00:09.243889 | Checking out repos
2026-04-05 00:00:09.690301 | Preparing playbooks
2026-04-05 00:00:10.867787 | Running Ansible setup
2026-04-05 00:00:19.625664 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-05 00:00:21.365070 |
2026-04-05 00:00:21.365199 | PLAY [Base pre]
2026-04-05 00:00:21.403150 |
2026-04-05 00:00:21.403276 | TASK [Setup log path fact]
2026-04-05 00:00:21.475340 | orchestrator | ok
2026-04-05 00:00:21.544448 |
2026-04-05 00:00:21.545215 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-05 00:00:21.639145 | orchestrator | ok
2026-04-05 00:00:21.727282 |
2026-04-05 00:00:21.727416 | TASK [emit-job-header : Print job information]
2026-04-05 00:00:21.850077 | # Job Information
2026-04-05 00:00:21.850214 | Ansible Version: 2.16.14
2026-04-05 00:00:21.850244 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-05 00:00:21.850272 | Pipeline: periodic-midnight
2026-04-05 00:00:21.850291 | Executor: 521e9411259a
2026-04-05 00:00:21.850309 | Triggered by: https://github.com/osism/testbed
2026-04-05 00:00:21.850326 | Event ID: 1928b94beaae403ebd11dd0b50186fab
2026-04-05 00:00:21.871706 |
2026-04-05 00:00:21.871811 | LOOP [emit-job-header : Print node information]
2026-04-05 00:00:22.550954 | orchestrator | ok:
2026-04-05 00:00:22.551156 | orchestrator | # Node Information
2026-04-05 00:00:22.551189 | orchestrator | Inventory Hostname: orchestrator
2026-04-05 00:00:22.551210 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-05 00:00:22.551229 | orchestrator | Username: zuul-testbed02
2026-04-05 00:00:22.551246 | orchestrator | Distro: Debian 12.13
2026-04-05 00:00:22.551265 | orchestrator | Provider: static-testbed
2026-04-05 00:00:22.551283 | orchestrator | Region:
2026-04-05 00:00:22.551300 | orchestrator | Label: testbed-orchestrator
2026-04-05 00:00:22.551317 | orchestrator | Product Name: OpenStack Nova
2026-04-05 00:00:22.551333 | orchestrator | Interface IP: 81.163.193.140
2026-04-05 00:00:22.561587 |
2026-04-05 00:00:22.561692 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-05 00:00:23.861908 | orchestrator -> localhost | changed
2026-04-05 00:00:23.868284 |
2026-04-05 00:00:23.868377 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-05 00:00:27.521620 | orchestrator -> localhost | changed
2026-04-05 00:00:27.537377 |
2026-04-05 00:00:27.537506 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-05 00:00:28.610022 | orchestrator -> localhost | ok
2026-04-05 00:00:28.615720 |
2026-04-05 00:00:28.615810 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-05 00:00:28.654113 | orchestrator | ok
2026-04-05 00:00:28.689221 | orchestrator | included: /var/lib/zuul/builds/b1a84e86d2ef42df8cc88d5fcfa34ba1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-05 00:00:28.709968 |
2026-04-05 00:00:28.710065 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-05 00:00:33.833054 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-05 00:00:33.833234 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/b1a84e86d2ef42df8cc88d5fcfa34ba1/work/b1a84e86d2ef42df8cc88d5fcfa34ba1_id_rsa
2026-04-05 00:00:33.833266 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/b1a84e86d2ef42df8cc88d5fcfa34ba1/work/b1a84e86d2ef42df8cc88d5fcfa34ba1_id_rsa.pub
2026-04-05 00:00:33.833290 | orchestrator -> localhost | The key fingerprint is:
2026-04-05 00:00:33.833309 | orchestrator -> localhost | SHA256:Omg5yp3uba8SvpQv8K8YdxGXQif+dAnO5JSAV30nug0 zuul-build-sshkey
2026-04-05 00:00:33.833328 | orchestrator -> localhost | The key's randomart image is:
2026-04-05 00:00:33.833357 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-05 00:00:33.833376 | orchestrator -> localhost | | .+o*o |
2026-04-05 00:00:33.833394 | orchestrator -> localhost | | .o.O.o..o . |
2026-04-05 00:00:33.833411 | orchestrator -> localhost | | .+ B oo o |
2026-04-05 00:00:33.833429 | orchestrator -> localhost | | * .E |
2026-04-05 00:00:33.833445 | orchestrator -> localhost | | . S + |
2026-04-05 00:00:33.833466 | orchestrator -> localhost | | . .+ o . . |
2026-04-05 00:00:33.833483 | orchestrator -> localhost | | .+O.+ |
2026-04-05 00:00:33.833500 | orchestrator -> localhost | | . O=*.. |
2026-04-05 00:00:33.833530 | orchestrator -> localhost | | +oBB=o. |
2026-04-05 00:00:33.833547 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-05 00:00:33.833594 | orchestrator -> localhost | ok: Runtime: 0:00:03.607024
2026-04-05 00:00:33.839548 |
2026-04-05 00:00:33.839646 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-05 00:00:33.887143 | orchestrator | ok
2026-04-05 00:00:33.908747 | orchestrator | included: /var/lib/zuul/builds/b1a84e86d2ef42df8cc88d5fcfa34ba1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-05 00:00:33.924421 |
2026-04-05 00:00:33.924538 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-05 00:00:34.001755 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:34.008265 |
2026-04-05 00:00:34.008357 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-05 00:00:35.095542 | orchestrator | changed
2026-04-05 00:00:35.100844 |
2026-04-05 00:00:35.100931 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-05 00:00:35.429442 | orchestrator | ok
2026-04-05 00:00:35.434679 |
2026-04-05 00:00:35.434756 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-05 00:00:35.917752 | orchestrator | ok
2026-04-05 00:00:35.925976 |
2026-04-05 00:00:35.926070 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-05 00:00:36.373604 | orchestrator | ok
2026-04-05 00:00:36.379043 |
2026-04-05 00:00:36.379126 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-05 00:00:36.441583 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:36.447147 |
2026-04-05 00:00:36.447236 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-05 00:00:37.488351 | orchestrator -> localhost | changed
2026-04-05 00:00:37.499610 |
2026-04-05 00:00:37.499707 | TASK [add-build-sshkey : Add back temp key]
2026-04-05 00:00:38.347320 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/b1a84e86d2ef42df8cc88d5fcfa34ba1/work/b1a84e86d2ef42df8cc88d5fcfa34ba1_id_rsa (zuul-build-sshkey)
2026-04-05 00:00:38.347492 | orchestrator -> localhost | ok: Runtime: 0:00:00.026189
2026-04-05 00:00:38.353550 |
2026-04-05 00:00:38.353625 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-05 00:00:38.919007 | orchestrator | ok
2026-04-05 00:00:38.930888 |
2026-04-05 00:00:38.930981 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-05 00:00:38.956159 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:39.031227 |
2026-04-05 00:00:39.031328 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-05 00:00:39.535597 | orchestrator | ok
2026-04-05 00:00:39.557299 |
2026-04-05 00:00:39.557397 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-05 00:00:39.627915 | orchestrator | ok
2026-04-05 00:00:39.641301 |
2026-04-05 00:00:39.641409 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-05 00:00:40.556650 | orchestrator -> localhost | ok
2026-04-05 00:00:40.562478 |
2026-04-05 00:00:40.562584 | TASK [validate-host : Collect information about the host]
2026-04-05 00:00:42.275130 | orchestrator | ok
2026-04-05 00:00:42.292005 |
2026-04-05 00:00:42.292106 | TASK [validate-host : Sanitize hostname]
2026-04-05 00:00:42.387210 | orchestrator | ok
2026-04-05 00:00:42.391568 |
2026-04-05 00:00:42.391645 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-05 00:00:44.009572 | orchestrator -> localhost | changed
2026-04-05 00:00:44.014581 |
2026-04-05 00:00:44.014660 | TASK [validate-host : Collect information about zuul worker]
2026-04-05 00:00:44.532419 | orchestrator | ok
2026-04-05 00:00:44.536918 |
2026-04-05 00:00:44.536997 | TASK [validate-host : Write out all zuul information for each host]
2026-04-05 00:00:45.857489 | orchestrator -> localhost | changed
2026-04-05 00:00:45.866793 |
2026-04-05 00:00:45.866893 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-05 00:00:46.156089 | orchestrator | ok
2026-04-05 00:00:46.173799 |
2026-04-05 00:00:46.173901 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-05 00:02:08.575576 | orchestrator | changed:
2026-04-05 00:02:08.575800 | orchestrator | .d..t...... src/
2026-04-05 00:02:08.575836 | orchestrator | .d..t...... src/github.com/
2026-04-05 00:02:08.575862 | orchestrator | .d..t...... src/github.com/osism/
2026-04-05 00:02:08.575884 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-05 00:02:08.575905 | orchestrator | RedHat.yml
2026-04-05 00:02:08.597018 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-05 00:02:08.597037 | orchestrator | RedHat.yml
2026-04-05 00:02:08.597093 | orchestrator | = 1.53.0"...
2026-04-05 00:02:27.693377 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-05 00:02:27.712864 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-05 00:02:27.840987 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-05 00:02:28.552082 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-05 00:02:28.613221 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-05 00:02:29.141676 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-05 00:02:29.202559 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-05 00:02:29.809465 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-05 00:02:29.809550 | orchestrator |
2026-04-05 00:02:29.809557 | orchestrator | Providers are signed by their developers.
2026-04-05 00:02:29.809563 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-05 00:02:29.809574 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-05 00:02:29.809609 | orchestrator |
2026-04-05 00:02:29.809614 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-05 00:02:29.809619 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-05 00:02:29.809632 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-05 00:02:29.809643 | orchestrator | you run "tofu init" in the future.
2026-04-05 00:02:29.810123 | orchestrator |
2026-04-05 00:02:29.810169 | orchestrator | OpenTofu has been successfully initialized!
2026-04-05 00:02:29.810193 | orchestrator |
2026-04-05 00:02:29.810199 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-05 00:02:29.810204 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-05 00:02:29.810208 | orchestrator | should now work.
2026-04-05 00:02:29.810212 | orchestrator |
2026-04-05 00:02:29.810216 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-05 00:02:29.810220 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-05 00:02:29.810232 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-05 00:02:29.985652 | orchestrator | Created and switched to workspace "ci"!
2026-04-05 00:02:29.985748 | orchestrator |
2026-04-05 00:02:29.985765 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-05 00:02:29.985779 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-05 00:02:29.985791 | orchestrator | for this configuration.
2026-04-05 00:02:30.404634 | orchestrator | ci.auto.tfvars
2026-04-05 00:02:30.418670 | orchestrator | default_custom.tf
2026-04-05 00:02:33.431134 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-05 00:02:33.988976 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-05 00:02:34.194364 | orchestrator |
2026-04-05 00:02:34.194435 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-05 00:02:34.194444 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-05 00:02:34.194449 | orchestrator | + create
2026-04-05 00:02:34.194454 | orchestrator | <= read (data resources)
2026-04-05 00:02:34.194459 | orchestrator |
2026-04-05 00:02:34.194464 | orchestrator | OpenTofu will perform the following actions:
2026-04-05 00:02:34.194476 | orchestrator |
2026-04-05 00:02:34.194481 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-05 00:02:34.194485 | orchestrator | # (config refers to values not yet known)
2026-04-05 00:02:34.194489 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-05 00:02:34.194494 | orchestrator | + checksum = (known after apply)
2026-04-05 00:02:34.194498 | orchestrator | + created_at = (known after apply)
2026-04-05 00:02:34.194502 | orchestrator | + file = (known after apply)
2026-04-05 00:02:34.194506 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.194529 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.194533 | orchestrator | + min_disk_gb = (known after apply)
2026-04-05 00:02:34.194538 | orchestrator | + min_ram_mb = (known after apply)
2026-04-05 00:02:34.194542 | orchestrator | + most_recent = true
2026-04-05 00:02:34.194546 | orchestrator | + name = (known after apply)
2026-04-05 00:02:34.194550 | orchestrator | + protected = (known after apply)
2026-04-05 00:02:34.194554 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.194561 | orchestrator | + schema = (known after apply)
2026-04-05 00:02:34.194565 | orchestrator | + size_bytes = (known after apply)
2026-04-05 00:02:34.194569 | orchestrator | + tags = (known after apply)
2026-04-05 00:02:34.194573 | orchestrator | + updated_at = (known after apply)
2026-04-05 00:02:34.194577 | orchestrator | }
2026-04-05 00:02:34.194583 | orchestrator |
2026-04-05 00:02:34.194587 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-05 00:02:34.194591 | orchestrator | # (config refers to values not yet known)
2026-04-05 00:02:34.194595 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-05 00:02:34.194599 | orchestrator | + checksum = (known after apply)
2026-04-05 00:02:34.194602 | orchestrator | + created_at = (known after apply)
2026-04-05 00:02:34.194606 | orchestrator | + file = (known after apply)
2026-04-05 00:02:34.194610 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.194614 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.194618 | orchestrator | + min_disk_gb = (known after apply)
2026-04-05 00:02:34.194621 | orchestrator | + min_ram_mb = (known after apply)
2026-04-05 00:02:34.194625 | orchestrator | + most_recent = true
2026-04-05 00:02:34.194629 | orchestrator | + name = (known after apply)
2026-04-05 00:02:34.194633 | orchestrator | + protected = (known after apply)
2026-04-05 00:02:34.194637 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.194640 | orchestrator | + schema = (known after apply)
2026-04-05 00:02:34.194644 | orchestrator | + size_bytes = (known after apply)
2026-04-05 00:02:34.194648 | orchestrator | + tags = (known after apply)
2026-04-05 00:02:34.194651 | orchestrator | + updated_at = (known after apply)
2026-04-05 00:02:34.194655 | orchestrator | }
2026-04-05 00:02:34.194659 | orchestrator |
2026-04-05 00:02:34.194663 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-05 00:02:34.194667 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-05 00:02:34.194671 | orchestrator | + content = (known after apply)
2026-04-05 00:02:34.194675 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 00:02:34.194679 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 00:02:34.194682 | orchestrator | + content_md5 = (known after apply)
2026-04-05 00:02:34.194686 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 00:02:34.194690 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 00:02:34.194694 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 00:02:34.194698 | orchestrator | + directory_permission = "0777"
2026-04-05 00:02:34.194701 | orchestrator | + file_permission = "0644"
2026-04-05 00:02:34.194705 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-05 00:02:34.194709 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.194712 | orchestrator | }
2026-04-05 00:02:34.194718 | orchestrator |
2026-04-05 00:02:34.194722 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-05 00:02:34.194726 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-05 00:02:34.194729 | orchestrator | + content = (known after apply)
2026-04-05 00:02:34.194733 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 00:02:34.194737 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 00:02:34.194741 | orchestrator | + content_md5 = (known after apply)
2026-04-05 00:02:34.194744 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 00:02:34.194748 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 00:02:34.194752 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 00:02:34.194756 | orchestrator | + directory_permission = "0777"
2026-04-05 00:02:34.194759 | orchestrator | + file_permission = "0644"
2026-04-05 00:02:34.194776 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-05 00:02:34.194780 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.194783 | orchestrator | }
2026-04-05 00:02:34.194787 | orchestrator |
2026-04-05 00:02:34.194798 | orchestrator | # local_file.inventory will be created
2026-04-05 00:02:34.194802 | orchestrator | + resource "local_file" "inventory" {
2026-04-05 00:02:34.194806 | orchestrator | + content = (known after apply)
2026-04-05 00:02:34.194809 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 00:02:34.194813 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 00:02:34.194817 | orchestrator | + content_md5 = (known after apply)
2026-04-05 00:02:34.194821 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 00:02:34.194825 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 00:02:34.194828 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 00:02:34.194832 | orchestrator | + directory_permission = "0777"
2026-04-05 00:02:34.194836 | orchestrator | + file_permission = "0644"
2026-04-05 00:02:34.194840 | orchestrator | + filename = "inventory.ci"
2026-04-05 00:02:34.194843 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.194847 | orchestrator | }
2026-04-05 00:02:34.194852 | orchestrator |
2026-04-05 00:02:34.194856 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-05 00:02:34.194860 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-05 00:02:34.194864 | orchestrator | + content = (sensitive value)
2026-04-05 00:02:34.194868 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 00:02:34.194871 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 00:02:34.194875 | orchestrator | + content_md5 = (known after apply)
2026-04-05 00:02:34.194879 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 00:02:34.194883 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 00:02:34.194887 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 00:02:34.194890 | orchestrator | + directory_permission = "0700"
2026-04-05 00:02:34.194912 | orchestrator | + file_permission = "0600"
2026-04-05 00:02:34.194916 | orchestrator | + filename = ".id_rsa.ci"
2026-04-05 00:02:34.194920 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.194924 | orchestrator | }
2026-04-05 00:02:34.194927 | orchestrator |
2026-04-05 00:02:34.194931 | orchestrator | # null_resource.node_semaphore will be created
2026-04-05 00:02:34.194935 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-05 00:02:34.194939 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.194943 | orchestrator | }
2026-04-05 00:02:34.194948 | orchestrator |
2026-04-05 00:02:34.194952 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-05 00:02:34.194956 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-05 00:02:34.194960 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.194963 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.194967 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.194971 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:34.194975 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.194979 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-05 00:02:34.194982 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.194986 | orchestrator | + size = 80
2026-04-05 00:02:34.194990 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.194994 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.194998 | orchestrator | }
2026-04-05 00:02:34.195003 | orchestrator |
2026-04-05 00:02:34.195007 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-05 00:02:34.195011 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:34.195014 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195018 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195022 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195029 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:34.195033 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195037 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-05 00:02:34.195041 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195044 | orchestrator | + size = 80
2026-04-05 00:02:34.195048 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195052 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195056 | orchestrator | }
2026-04-05 00:02:34.195061 | orchestrator |
2026-04-05 00:02:34.195065 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-05 00:02:34.195069 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:34.195072 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195076 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195080 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195084 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:34.195088 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195091 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-05 00:02:34.195095 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195099 | orchestrator | + size = 80
2026-04-05 00:02:34.195103 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195107 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195111 | orchestrator | }
2026-04-05 00:02:34.195116 | orchestrator |
2026-04-05 00:02:34.195120 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-05 00:02:34.195123 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:34.195127 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195131 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195135 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195138 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:34.195142 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195146 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-05 00:02:34.195150 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195154 | orchestrator | + size = 80
2026-04-05 00:02:34.195157 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195161 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195170 | orchestrator | }
2026-04-05 00:02:34.195175 | orchestrator |
2026-04-05 00:02:34.195179 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-05 00:02:34.195183 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:34.195187 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195190 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195194 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195198 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:34.195202 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195208 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-05 00:02:34.195212 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195216 | orchestrator | + size = 80
2026-04-05 00:02:34.195220 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195223 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195227 | orchestrator | }
2026-04-05 00:02:34.195233 | orchestrator |
2026-04-05 00:02:34.195236 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-05 00:02:34.195240 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:34.195244 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195248 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195252 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195259 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:34.195263 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195267 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-05 00:02:34.195270 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195274 | orchestrator | + size = 80
2026-04-05 00:02:34.195278 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195282 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195285 | orchestrator | }
2026-04-05 00:02:34.195291 | orchestrator |
2026-04-05 00:02:34.195294 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-05 00:02:34.195298 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:34.195302 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195306 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195310 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195313 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:34.195317 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195321 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-05 00:02:34.195325 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195328 | orchestrator | + size = 80
2026-04-05 00:02:34.195332 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195336 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195340 | orchestrator | }
2026-04-05 00:02:34.195345 | orchestrator |
2026-04-05 00:02:34.195349 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-05 00:02:34.195353 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:34.195357 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195360 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195364 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195368 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195372 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-05 00:02:34.195376 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195380 | orchestrator | + size = 20
2026-04-05 00:02:34.195383 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195387 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195391 | orchestrator | }
2026-04-05 00:02:34.195396 | orchestrator |
2026-04-05 00:02:34.195400 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-05 00:02:34.195404 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:34.195407 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195411 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195415 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195419 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195423 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-05 00:02:34.195426 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195430 | orchestrator | + size = 20
2026-04-05 00:02:34.195434 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195438 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195441 | orchestrator | }
2026-04-05 00:02:34.195447 | orchestrator |
2026-04-05 00:02:34.195451 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-05 00:02:34.195454 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:34.195458 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195462 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195466 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195469 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195473 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-05 00:02:34.195477 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195484 | orchestrator | + size = 20
2026-04-05 00:02:34.195487 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195491 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195495 | orchestrator | }
2026-04-05 00:02:34.195501 | orchestrator |
2026-04-05 00:02:34.195505 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-05 00:02:34.195509 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:34.195513 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195517 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195520 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195524 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195528 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-05 00:02:34.195532 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195536 | orchestrator | + size = 20
2026-04-05 00:02:34.195539 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195543 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195547 | orchestrator | }
2026-04-05 00:02:34.195551 | orchestrator |
2026-04-05 00:02:34.195556 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-05 00:02:34.195560 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:34.195564 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195568 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195572 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195576 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195579 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-05 00:02:34.195583 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195589 | orchestrator | + size = 20
2026-04-05 00:02:34.195593 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195597 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195601 | orchestrator | }
2026-04-05 00:02:34.195605 | orchestrator |
2026-04-05 00:02:34.195608 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-05 00:02:34.195612 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:34.195616 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195620 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195624 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195627 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195631 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-05 00:02:34.195635 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195639 | orchestrator | + size = 20
2026-04-05 00:02:34.195642 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195646 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195650 | orchestrator | }
2026-04-05 00:02:34.195655 | orchestrator |
2026-04-05 00:02:34.195659 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-05 00:02:34.195663 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:34.195667 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195671 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195674 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195678 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195682 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-05 00:02:34.195686 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195689 | orchestrator | + size = 20
2026-04-05 00:02:34.195693 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195697 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195701 | orchestrator | }
2026-04-05 00:02:34.195706 | orchestrator |
2026-04-05 00:02:34.195710 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-05 00:02:34.195714 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:34.195721 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:34.195725 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:34.195728 | orchestrator | + id = (known after apply)
2026-04-05 00:02:34.195732 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:34.195736 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-05 00:02:34.195740 | orchestrator | + region = (known after apply)
2026-04-05 00:02:34.195744 | orchestrator | + size = 20
2026-04-05 00:02:34.195747 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:34.195751 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:34.195755 | orchestrator | }
2026-04-05 00:02:34.195760 | orchestrator |
2026-04-05 00:02:34.195764 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-05 00:02:34.195768 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-05 00:02:34.195772 | orchestrator | + attachment = (known after apply) 2026-04-05 00:02:34.195775 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:34.195779 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.195783 | orchestrator | + metadata = (known after apply) 2026-04-05 00:02:34.195787 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-05 00:02:34.195791 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.195794 | orchestrator | + size = 20 2026-04-05 00:02:34.195798 | orchestrator | + volume_retype_policy = "never" 2026-04-05 00:02:34.195802 | orchestrator | + volume_type = "ssd" 2026-04-05 00:02:34.195806 | orchestrator | } 2026-04-05 00:02:34.196037 | orchestrator | 2026-04-05 00:02:34.196044 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-05 00:02:34.196048 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-05 00:02:34.196052 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:34.196056 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:34.196059 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:34.196063 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:34.196067 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:34.196071 | orchestrator | + config_drive = true 2026-04-05 00:02:34.196075 | orchestrator | + created = (known after apply) 2026-04-05 00:02:34.196078 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:34.196082 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-05 00:02:34.196086 | orchestrator | + force_delete = false 2026-04-05 00:02:34.196090 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:34.196093 | 
orchestrator | + id = (known after apply) 2026-04-05 00:02:34.196097 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:34.196101 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:34.196105 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:34.196108 | orchestrator | + name = "testbed-manager" 2026-04-05 00:02:34.196112 | orchestrator | + power_state = "active" 2026-04-05 00:02:34.196116 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.196120 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:34.196123 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:34.196127 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:34.196131 | orchestrator | + user_data = (sensitive value) 2026-04-05 00:02:34.196135 | orchestrator | 2026-04-05 00:02:34.196139 | orchestrator | + block_device { 2026-04-05 00:02:34.196143 | orchestrator | + boot_index = 0 2026-04-05 00:02:34.196147 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:34.196154 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:34.196158 | orchestrator | + multiattach = false 2026-04-05 00:02:34.196162 | orchestrator | + source_type = "volume" 2026-04-05 00:02:34.196165 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.196174 | orchestrator | } 2026-04-05 00:02:34.196178 | orchestrator | 2026-04-05 00:02:34.196182 | orchestrator | + network { 2026-04-05 00:02:34.196186 | orchestrator | + access_network = false 2026-04-05 00:02:34.196190 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:34.196193 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:34.196197 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:34.196201 | orchestrator | + name = (known after apply) 2026-04-05 00:02:34.196205 | orchestrator | + port = (known after apply) 2026-04-05 00:02:34.196209 | orchestrator | + uuid = (known after apply) 2026-04-05 
00:02:34.196212 | orchestrator | } 2026-04-05 00:02:34.196216 | orchestrator | } 2026-04-05 00:02:34.196222 | orchestrator | 2026-04-05 00:02:34.196226 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-05 00:02:34.196230 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:34.196234 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:34.196238 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:34.196241 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:34.196245 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:34.196249 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:34.196253 | orchestrator | + config_drive = true 2026-04-05 00:02:34.196257 | orchestrator | + created = (known after apply) 2026-04-05 00:02:34.196260 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:34.196264 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:34.196268 | orchestrator | + force_delete = false 2026-04-05 00:02:34.196272 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:34.196276 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.196279 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:34.196283 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:34.196287 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:34.196291 | orchestrator | + name = "testbed-node-0" 2026-04-05 00:02:34.196294 | orchestrator | + power_state = "active" 2026-04-05 00:02:34.196298 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.196302 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:34.196306 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:34.196310 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:34.196313 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:34.196317 | orchestrator | 2026-04-05 00:02:34.196321 | orchestrator | + block_device { 2026-04-05 00:02:34.196325 | orchestrator | + boot_index = 0 2026-04-05 00:02:34.196329 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:34.196332 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:34.196336 | orchestrator | + multiattach = false 2026-04-05 00:02:34.196340 | orchestrator | + source_type = "volume" 2026-04-05 00:02:34.196344 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.196348 | orchestrator | } 2026-04-05 00:02:34.196352 | orchestrator | 2026-04-05 00:02:34.196355 | orchestrator | + network { 2026-04-05 00:02:34.196359 | orchestrator | + access_network = false 2026-04-05 00:02:34.196363 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:34.196367 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:34.196371 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:34.196374 | orchestrator | + name = (known after apply) 2026-04-05 00:02:34.196378 | orchestrator | + port = (known after apply) 2026-04-05 00:02:34.196382 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.196386 | orchestrator | } 2026-04-05 00:02:34.196390 | orchestrator | } 2026-04-05 00:02:34.196396 | orchestrator | 2026-04-05 00:02:34.196400 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-05 00:02:34.196404 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:34.196408 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:34.196415 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:34.196419 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:34.196422 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:34.196426 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:34.196430 
| orchestrator | + config_drive = true 2026-04-05 00:02:34.196434 | orchestrator | + created = (known after apply) 2026-04-05 00:02:34.196437 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:34.196441 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:34.196445 | orchestrator | + force_delete = false 2026-04-05 00:02:34.196449 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:34.196453 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.196456 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:34.196460 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:34.196464 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:34.196468 | orchestrator | + name = "testbed-node-1" 2026-04-05 00:02:34.196471 | orchestrator | + power_state = "active" 2026-04-05 00:02:34.196475 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.196479 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:34.196483 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:34.196487 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:34.196490 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:34.196494 | orchestrator | 2026-04-05 00:02:34.196498 | orchestrator | + block_device { 2026-04-05 00:02:34.196502 | orchestrator | + boot_index = 0 2026-04-05 00:02:34.196506 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:34.196510 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:34.196513 | orchestrator | + multiattach = false 2026-04-05 00:02:34.196517 | orchestrator | + source_type = "volume" 2026-04-05 00:02:34.196521 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.196525 | orchestrator | } 2026-04-05 00:02:34.196528 | orchestrator | 2026-04-05 00:02:34.196532 | orchestrator | + network { 2026-04-05 00:02:34.196536 | orchestrator | + access_network = 
false 2026-04-05 00:02:34.196540 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:34.196544 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:34.196548 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:34.196551 | orchestrator | + name = (known after apply) 2026-04-05 00:02:34.196555 | orchestrator | + port = (known after apply) 2026-04-05 00:02:34.196559 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.196563 | orchestrator | } 2026-04-05 00:02:34.196567 | orchestrator | } 2026-04-05 00:02:34.196669 | orchestrator | 2026-04-05 00:02:34.196675 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-05 00:02:34.196679 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:34.196683 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:34.196687 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:34.196691 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:34.196695 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:34.196705 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:34.196709 | orchestrator | + config_drive = true 2026-04-05 00:02:34.196713 | orchestrator | + created = (known after apply) 2026-04-05 00:02:34.196717 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:34.196720 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:34.196724 | orchestrator | + force_delete = false 2026-04-05 00:02:34.196728 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:34.196732 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.196735 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:34.196743 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:34.196747 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:34.196751 | orchestrator | + name = 
"testbed-node-2" 2026-04-05 00:02:34.196755 | orchestrator | + power_state = "active" 2026-04-05 00:02:34.196758 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.196762 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:34.196766 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:34.196770 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:34.196774 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:34.196777 | orchestrator | 2026-04-05 00:02:34.196781 | orchestrator | + block_device { 2026-04-05 00:02:34.196785 | orchestrator | + boot_index = 0 2026-04-05 00:02:34.196789 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:34.196793 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:34.196796 | orchestrator | + multiattach = false 2026-04-05 00:02:34.196800 | orchestrator | + source_type = "volume" 2026-04-05 00:02:34.196804 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.196808 | orchestrator | } 2026-04-05 00:02:34.196812 | orchestrator | 2026-04-05 00:02:34.196816 | orchestrator | + network { 2026-04-05 00:02:34.196819 | orchestrator | + access_network = false 2026-04-05 00:02:34.196823 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:34.196827 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:34.196831 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:34.196835 | orchestrator | + name = (known after apply) 2026-04-05 00:02:34.196838 | orchestrator | + port = (known after apply) 2026-04-05 00:02:34.196842 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.196846 | orchestrator | } 2026-04-05 00:02:34.196850 | orchestrator | } 2026-04-05 00:02:34.196856 | orchestrator | 2026-04-05 00:02:34.196860 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-05 00:02:34.196864 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:34.196867 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:34.196871 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:34.196875 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:34.196879 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:34.196882 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:34.196886 | orchestrator | + config_drive = true 2026-04-05 00:02:34.196890 | orchestrator | + created = (known after apply) 2026-04-05 00:02:34.196908 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:34.196913 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:34.196916 | orchestrator | + force_delete = false 2026-04-05 00:02:34.196920 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:34.196924 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.196928 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:34.196931 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:34.196935 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:34.196939 | orchestrator | + name = "testbed-node-3" 2026-04-05 00:02:34.196943 | orchestrator | + power_state = "active" 2026-04-05 00:02:34.196946 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.196950 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:34.196954 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:34.196958 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:34.196962 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:34.196965 | orchestrator | 2026-04-05 00:02:34.196969 | orchestrator | + block_device { 2026-04-05 00:02:34.196976 | orchestrator | + boot_index = 0 2026-04-05 00:02:34.196979 | orchestrator | + delete_on_termination = false 2026-04-05 
00:02:34.196983 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:34.196990 | orchestrator | + multiattach = false 2026-04-05 00:02:34.196994 | orchestrator | + source_type = "volume" 2026-04-05 00:02:34.196998 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.197002 | orchestrator | } 2026-04-05 00:02:34.197006 | orchestrator | 2026-04-05 00:02:34.197009 | orchestrator | + network { 2026-04-05 00:02:34.197013 | orchestrator | + access_network = false 2026-04-05 00:02:34.197017 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:34.197021 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:34.197024 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:34.197028 | orchestrator | + name = (known after apply) 2026-04-05 00:02:34.197032 | orchestrator | + port = (known after apply) 2026-04-05 00:02:34.197036 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.197040 | orchestrator | } 2026-04-05 00:02:34.197044 | orchestrator | } 2026-04-05 00:02:34.197049 | orchestrator | 2026-04-05 00:02:34.197053 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-05 00:02:34.197057 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:34.197061 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:34.197065 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:34.197069 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:34.197072 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:34.197076 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:34.197153 | orchestrator | + config_drive = true 2026-04-05 00:02:34.197159 | orchestrator | + created = (known after apply) 2026-04-05 00:02:34.197163 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:34.197167 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:34.197170 | 
orchestrator | + force_delete = false 2026-04-05 00:02:34.197174 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:34.197178 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.197182 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:34.197186 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:34.197190 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:34.197194 | orchestrator | + name = "testbed-node-4" 2026-04-05 00:02:34.197197 | orchestrator | + power_state = "active" 2026-04-05 00:02:34.197201 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.197205 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:34.197209 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:34.197213 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:34.197216 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:34.197220 | orchestrator | 2026-04-05 00:02:34.197224 | orchestrator | + block_device { 2026-04-05 00:02:34.197228 | orchestrator | + boot_index = 0 2026-04-05 00:02:34.197232 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:34.197235 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:34.197239 | orchestrator | + multiattach = false 2026-04-05 00:02:34.197243 | orchestrator | + source_type = "volume" 2026-04-05 00:02:34.197247 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.197251 | orchestrator | } 2026-04-05 00:02:34.197255 | orchestrator | 2026-04-05 00:02:34.197258 | orchestrator | + network { 2026-04-05 00:02:34.197262 | orchestrator | + access_network = false 2026-04-05 00:02:34.197266 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:34.197270 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:34.197274 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:34.197277 | orchestrator | + name = (known 
after apply) 2026-04-05 00:02:34.197281 | orchestrator | + port = (known after apply) 2026-04-05 00:02:34.197285 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.197289 | orchestrator | } 2026-04-05 00:02:34.197293 | orchestrator | } 2026-04-05 00:02:34.197303 | orchestrator | 2026-04-05 00:02:34.197307 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-05 00:02:34.197311 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:34.197315 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:34.197319 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:34.197323 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:34.197326 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:34.197330 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:34.197334 | orchestrator | + config_drive = true 2026-04-05 00:02:34.197338 | orchestrator | + created = (known after apply) 2026-04-05 00:02:34.197341 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:34.197345 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:34.197349 | orchestrator | + force_delete = false 2026-04-05 00:02:34.197356 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:34.197360 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.197364 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:34.197367 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:34.197371 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:34.197375 | orchestrator | + name = "testbed-node-5" 2026-04-05 00:02:34.197379 | orchestrator | + power_state = "active" 2026-04-05 00:02:34.197383 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.197386 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:34.197390 | orchestrator | + 
stop_before_destroy = false 2026-04-05 00:02:34.197394 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:34.197398 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:34.197402 | orchestrator | 2026-04-05 00:02:34.197405 | orchestrator | + block_device { 2026-04-05 00:02:34.197409 | orchestrator | + boot_index = 0 2026-04-05 00:02:34.197413 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:34.197417 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:34.197420 | orchestrator | + multiattach = false 2026-04-05 00:02:34.197424 | orchestrator | + source_type = "volume" 2026-04-05 00:02:34.197428 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.197432 | orchestrator | } 2026-04-05 00:02:34.197436 | orchestrator | 2026-04-05 00:02:34.197439 | orchestrator | + network { 2026-04-05 00:02:34.197443 | orchestrator | + access_network = false 2026-04-05 00:02:34.197447 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:34.197451 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:34.197455 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:34.197458 | orchestrator | + name = (known after apply) 2026-04-05 00:02:34.197462 | orchestrator | + port = (known after apply) 2026-04-05 00:02:34.197466 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:34.197470 | orchestrator | } 2026-04-05 00:02:34.197474 | orchestrator | } 2026-04-05 00:02:34.197477 | orchestrator | 2026-04-05 00:02:34.197481 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-05 00:02:34.197485 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-05 00:02:34.197489 | orchestrator | + fingerprint = (known after apply) 2026-04-05 00:02:34.197493 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.197496 | orchestrator | + name = "testbed" 2026-04-05 00:02:34.197500 | orchestrator | + private_key = 
(sensitive value) 2026-04-05 00:02:34.197504 | orchestrator | + public_key = (known after apply) 2026-04-05 00:02:34.197508 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.197512 | orchestrator | + user_id = (known after apply) 2026-04-05 00:02:34.197516 | orchestrator | } 2026-04-05 00:02:34.197519 | orchestrator | 2026-04-05 00:02:34.197523 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-05 00:02:34.197527 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:34.197534 | orchestrator | + device = (known after apply) 2026-04-05 00:02:34.197538 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.197542 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:34.197545 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.197549 | orchestrator | + volume_id = (known after apply) 2026-04-05 00:02:34.197553 | orchestrator | } 2026-04-05 00:02:34.197559 | orchestrator | 2026-04-05 00:02:34.197563 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-05 00:02:34.197567 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:34.197570 | orchestrator | + device = (known after apply) 2026-04-05 00:02:34.197574 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.197578 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:34.197582 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.197586 | orchestrator | + volume_id = (known after apply) 2026-04-05 00:02:34.197589 | orchestrator | } 2026-04-05 00:02:34.197593 | orchestrator | 2026-04-05 00:02:34.197597 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-05 00:02:34.197601 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-04-05 00:02:34.197605 | orchestrator |       + device      = (known after apply)
2026-04-05 00:02:34.197609 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.197612 | orchestrator |       + instance_id = (known after apply)
2026-04-05 00:02:34.197616 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.197621 | orchestrator |       + volume_id   = (known after apply)
2026-04-05 00:02:34.197627 | orchestrator |     }
2026-04-05 00:02:34.197633 | orchestrator |
2026-04-05 00:02:34.197639 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-04-05 00:02:34.197644 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-05 00:02:34.197650 | orchestrator |       + device      = (known after apply)
2026-04-05 00:02:34.197656 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.197663 | orchestrator |       + instance_id = (known after apply)
2026-04-05 00:02:34.197672 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.197680 | orchestrator |       + volume_id   = (known after apply)
2026-04-05 00:02:34.197685 | orchestrator |     }
2026-04-05 00:02:34.197691 | orchestrator |
2026-04-05 00:02:34.197697 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-04-05 00:02:34.197703 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-05 00:02:34.197708 | orchestrator |       + device      = (known after apply)
2026-04-05 00:02:34.197713 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.197719 | orchestrator |       + instance_id = (known after apply)
2026-04-05 00:02:34.197728 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.197733 | orchestrator |       + volume_id   = (known after apply)
2026-04-05 00:02:34.197739 | orchestrator |     }
2026-04-05 00:02:34.197745 | orchestrator |
2026-04-05 00:02:34.197750 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-04-05 00:02:34.197757 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-05 00:02:34.197762 | orchestrator |       + device      = (known after apply)
2026-04-05 00:02:34.197768 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.197774 | orchestrator |       + instance_id = (known after apply)
2026-04-05 00:02:34.197780 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.197786 | orchestrator |       + volume_id   = (known after apply)
2026-04-05 00:02:34.197792 | orchestrator |     }
2026-04-05 00:02:34.197798 | orchestrator |
2026-04-05 00:02:34.197806 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-04-05 00:02:34.197810 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-05 00:02:34.197813 | orchestrator |       + device      = (known after apply)
2026-04-05 00:02:34.197817 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.197821 | orchestrator |       + instance_id = (known after apply)
2026-04-05 00:02:34.197825 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.197833 | orchestrator |       + volume_id   = (known after apply)
2026-04-05 00:02:34.197836 | orchestrator |     }
2026-04-05 00:02:34.197840 | orchestrator |
2026-04-05 00:02:34.197844 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-04-05 00:02:34.197848 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-05 00:02:34.197852 | orchestrator |       + device      = (known after apply)
2026-04-05 00:02:34.197855 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.197859 | orchestrator |       + instance_id = (known after apply)
2026-04-05 00:02:34.197863 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.197867 | orchestrator |       + volume_id   = (known after apply)
2026-04-05 00:02:34.197870 | orchestrator |     }
2026-04-05 00:02:34.197874 | orchestrator |
2026-04-05 00:02:34.197878 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-04-05 00:02:34.197882 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-05 00:02:34.197886 | orchestrator |       + device      = (known after apply)
2026-04-05 00:02:34.197889 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.197893 | orchestrator |       + instance_id = (known after apply)
2026-04-05 00:02:34.197908 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.197912 | orchestrator |       + volume_id   = (known after apply)
2026-04-05 00:02:34.197916 | orchestrator |     }
2026-04-05 00:02:34.197920 | orchestrator |
2026-04-05 00:02:34.197924 | orchestrator |   # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-04-05 00:02:34.197929 | orchestrator |   + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-04-05 00:02:34.197932 | orchestrator |       + fixed_ip    = (known after apply)
2026-04-05 00:02:34.197936 | orchestrator |       + floating_ip = (known after apply)
2026-04-05 00:02:34.197940 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.197944 | orchestrator |       + port_id     = (known after apply)
2026-04-05 00:02:34.197948 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.197951 | orchestrator |     }
2026-04-05 00:02:34.197959 | orchestrator |
2026-04-05 00:02:34.197963 | orchestrator |   # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-04-05 00:02:34.197967 | orchestrator |   + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-04-05 00:02:34.197971 | orchestrator |       + address    = (known after apply)
2026-04-05 00:02:34.197975 | orchestrator |       + all_tags   = (known after apply)
2026-04-05 00:02:34.197978 | orchestrator |       + dns_domain = (known after apply)
2026-04-05 00:02:34.197982 | orchestrator |       + dns_name   = (known after apply)
2026-04-05 00:02:34.197986 | orchestrator |       + fixed_ip   = (known after apply)
2026-04-05 00:02:34.197990 | orchestrator |       + id         = (known after apply)
2026-04-05 00:02:34.197993 | orchestrator |       + pool       = "public"
2026-04-05 00:02:34.197997 | orchestrator |       + port_id    = (known after apply)
2026-04-05 00:02:34.198001 | orchestrator |       + region     = (known after apply)
2026-04-05 00:02:34.198005 | orchestrator |       + subnet_id  = (known after apply)
2026-04-05 00:02:34.198008 | orchestrator |       + tenant_id  = (known after apply)
2026-04-05 00:02:34.198026 | orchestrator |     }
2026-04-05 00:02:34.198031 | orchestrator |
2026-04-05 00:02:34.198035 | orchestrator |   # openstack_networking_network_v2.net_management will be created
2026-04-05 00:02:34.198039 | orchestrator |   + resource "openstack_networking_network_v2" "net_management" {
2026-04-05 00:02:34.198043 | orchestrator |       + admin_state_up          = (known after apply)
2026-04-05 00:02:34.198047 | orchestrator |       + all_tags                = (known after apply)
2026-04-05 00:02:34.198050 | orchestrator |       + availability_zone_hints = [
2026-04-05 00:02:34.198082 | orchestrator |           + "nova",
2026-04-05 00:02:34.198086 | orchestrator |         ]
2026-04-05 00:02:34.198090 | orchestrator |       + dns_domain              = (known after apply)
2026-04-05 00:02:34.198094 | orchestrator |       + external                = (known after apply)
2026-04-05 00:02:34.198098 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.198102 | orchestrator |       + mtu                     = (known after apply)
2026-04-05 00:02:34.198105 | orchestrator |       + name                    = "net-testbed-management"
2026-04-05 00:02:34.198109 | orchestrator |       + port_security_enabled   = (known after apply)
2026-04-05 00:02:34.198116 | orchestrator |       + qos_policy_id           = (known after apply)
2026-04-05 00:02:34.198120 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.198124 | orchestrator |       + shared                  = (known after apply)
2026-04-05 00:02:34.198128 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.198131 | orchestrator |       + transparent_vlan        = (known after apply)
2026-04-05 00:02:34.198135 | orchestrator |
2026-04-05 00:02:34.198139 | orchestrator |       + segments (known after apply)
2026-04-05 00:02:34.198143 | orchestrator |     }
2026-04-05 00:02:34.198147 | orchestrator |
2026-04-05 00:02:34.198150 | orchestrator |   # openstack_networking_port_v2.manager_port_management will be created
2026-04-05 00:02:34.198154 | orchestrator |   + resource "openstack_networking_port_v2" "manager_port_management" {
2026-04-05 00:02:34.198158 | orchestrator |       + admin_state_up         = (known after apply)
2026-04-05 00:02:34.198162 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-04-05 00:02:34.198166 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-05 00:02:34.198172 | orchestrator |       + all_tags               = (known after apply)
2026-04-05 00:02:34.198176 | orchestrator |       + device_id              = (known after apply)
2026-04-05 00:02:34.198180 | orchestrator |       + device_owner           = (known after apply)
2026-04-05 00:02:34.198184 | orchestrator |       + dns_assignment         = (known after apply)
2026-04-05 00:02:34.198187 | orchestrator |       + dns_name               = (known after apply)
2026-04-05 00:02:34.198191 | orchestrator |       + id                     = (known after apply)
2026-04-05 00:02:34.198195 | orchestrator |       + mac_address            = (known after apply)
2026-04-05 00:02:34.198199 | orchestrator |       + network_id             = (known after apply)
2026-04-05 00:02:34.198203 | orchestrator |       + port_security_enabled  = (known after apply)
2026-04-05 00:02:34.198206 | orchestrator |       + qos_policy_id          = (known after apply)
2026-04-05 00:02:34.198210 | orchestrator |       + region                 = (known after apply)
2026-04-05 00:02:34.198214 | orchestrator |       + security_group_ids     = (known after apply)
2026-04-05 00:02:34.198218 | orchestrator |       + tenant_id              = (known after apply)
2026-04-05 00:02:34.198221 | orchestrator |
2026-04-05 00:02:34.198225 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198229 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-05 00:02:34.198233 | orchestrator |         }
2026-04-05 00:02:34.198237 | orchestrator |
2026-04-05 00:02:34.198240 | orchestrator |       + binding (known after apply)
2026-04-05 00:02:34.198244 | orchestrator |
2026-04-05 00:02:34.198248 | orchestrator |       + fixed_ip {
2026-04-05 00:02:34.198252 | orchestrator |           + ip_address = "192.168.16.5"
2026-04-05 00:02:34.198256 | orchestrator |           + subnet_id  = (known after apply)
2026-04-05 00:02:34.198261 | orchestrator |         }
2026-04-05 00:02:34.198266 | orchestrator |     }
2026-04-05 00:02:34.198272 | orchestrator |
2026-04-05 00:02:34.198279 | orchestrator |   # openstack_networking_port_v2.node_port_management[0] will be created
2026-04-05 00:02:34.198283 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-05 00:02:34.198287 | orchestrator |       + admin_state_up         = (known after apply)
2026-04-05 00:02:34.198291 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-04-05 00:02:34.198295 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-05 00:02:34.198299 | orchestrator |       + all_tags               = (known after apply)
2026-04-05 00:02:34.198302 | orchestrator |       + device_id              = (known after apply)
2026-04-05 00:02:34.198306 | orchestrator |       + device_owner           = (known after apply)
2026-04-05 00:02:34.198310 | orchestrator |       + dns_assignment         = (known after apply)
2026-04-05 00:02:34.198313 | orchestrator |       + dns_name               = (known after apply)
2026-04-05 00:02:34.198317 | orchestrator |       + id                     = (known after apply)
2026-04-05 00:02:34.198321 | orchestrator |       + mac_address            = (known after apply)
2026-04-05 00:02:34.198324 | orchestrator |       + network_id             = (known after apply)
2026-04-05 00:02:34.198328 | orchestrator |       + port_security_enabled  = (known after apply)
2026-04-05 00:02:34.198332 | orchestrator |       + qos_policy_id          = (known after apply)
2026-04-05 00:02:34.198336 | orchestrator |       + region                 = (known after apply)
2026-04-05 00:02:34.198343 | orchestrator |       + security_group_ids     = (known after apply)
2026-04-05 00:02:34.198346 | orchestrator |       + tenant_id              = (known after apply)
2026-04-05 00:02:34.198350 | orchestrator |
2026-04-05 00:02:34.198354 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198358 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-05 00:02:34.198361 | orchestrator |         }
2026-04-05 00:02:34.198365 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198369 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-05 00:02:34.198373 | orchestrator |         }
2026-04-05 00:02:34.198376 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198380 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-05 00:02:34.198384 | orchestrator |         }
2026-04-05 00:02:34.198388 | orchestrator |
2026-04-05 00:02:34.198391 | orchestrator |       + binding (known after apply)
2026-04-05 00:02:34.198395 | orchestrator |
2026-04-05 00:02:34.198399 | orchestrator |       + fixed_ip {
2026-04-05 00:02:34.198408 | orchestrator |           + ip_address = "192.168.16.10"
2026-04-05 00:02:34.198412 | orchestrator |           + subnet_id  = (known after apply)
2026-04-05 00:02:34.198416 | orchestrator |         }
2026-04-05 00:02:34.198419 | orchestrator |     }
2026-04-05 00:02:34.198423 | orchestrator |
2026-04-05 00:02:34.198427 | orchestrator |   # openstack_networking_port_v2.node_port_management[1] will be created
2026-04-05 00:02:34.198431 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-05 00:02:34.198434 | orchestrator |       + admin_state_up         = (known after apply)
2026-04-05 00:02:34.198438 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-04-05 00:02:34.198442 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-05 00:02:34.198445 | orchestrator |       + all_tags               = (known after apply)
2026-04-05 00:02:34.198449 | orchestrator |       + device_id              = (known after apply)
2026-04-05 00:02:34.198453 | orchestrator |       + device_owner           = (known after apply)
2026-04-05 00:02:34.198457 | orchestrator |       + dns_assignment         = (known after apply)
2026-04-05 00:02:34.198460 | orchestrator |       + dns_name               = (known after apply)
2026-04-05 00:02:34.198464 | orchestrator |       + id                     = (known after apply)
2026-04-05 00:02:34.198468 | orchestrator |       + mac_address            = (known after apply)
2026-04-05 00:02:34.198472 | orchestrator |       + network_id             = (known after apply)
2026-04-05 00:02:34.198475 | orchestrator |       + port_security_enabled  = (known after apply)
2026-04-05 00:02:34.198479 | orchestrator |       + qos_policy_id          = (known after apply)
2026-04-05 00:02:34.198483 | orchestrator |       + region                 = (known after apply)
2026-04-05 00:02:34.198486 | orchestrator |       + security_group_ids     = (known after apply)
2026-04-05 00:02:34.198490 | orchestrator |       + tenant_id              = (known after apply)
2026-04-05 00:02:34.198494 | orchestrator |
2026-04-05 00:02:34.198498 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198501 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-05 00:02:34.198505 | orchestrator |         }
2026-04-05 00:02:34.198509 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198513 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-05 00:02:34.198516 | orchestrator |         }
2026-04-05 00:02:34.198520 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198524 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-05 00:02:34.198527 | orchestrator |         }
2026-04-05 00:02:34.198531 | orchestrator |
2026-04-05 00:02:34.198535 | orchestrator |       + binding (known after apply)
2026-04-05 00:02:34.198539 | orchestrator |
2026-04-05 00:02:34.198542 | orchestrator |       + fixed_ip {
2026-04-05 00:02:34.198546 | orchestrator |           + ip_address = "192.168.16.11"
2026-04-05 00:02:34.198550 | orchestrator |           + subnet_id  = (known after apply)
2026-04-05 00:02:34.198554 | orchestrator |         }
2026-04-05 00:02:34.198557 | orchestrator |     }
2026-04-05 00:02:34.198561 | orchestrator |
2026-04-05 00:02:34.198565 | orchestrator |   # openstack_networking_port_v2.node_port_management[2] will be created
2026-04-05 00:02:34.198569 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-05 00:02:34.198572 | orchestrator |       + admin_state_up         = (known after apply)
2026-04-05 00:02:34.198576 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-04-05 00:02:34.198580 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-05 00:02:34.198584 | orchestrator |       + all_tags               = (known after apply)
2026-04-05 00:02:34.198590 | orchestrator |       + device_id              = (known after apply)
2026-04-05 00:02:34.198594 | orchestrator |       + device_owner           = (known after apply)
2026-04-05 00:02:34.198598 | orchestrator |       + dns_assignment         = (known after apply)
2026-04-05 00:02:34.198602 | orchestrator |       + dns_name               = (known after apply)
2026-04-05 00:02:34.198608 | orchestrator |       + id                     = (known after apply)
2026-04-05 00:02:34.198612 | orchestrator |       + mac_address            = (known after apply)
2026-04-05 00:02:34.198616 | orchestrator |       + network_id             = (known after apply)
2026-04-05 00:02:34.198619 | orchestrator |       + port_security_enabled  = (known after apply)
2026-04-05 00:02:34.198623 | orchestrator |       + qos_policy_id          = (known after apply)
2026-04-05 00:02:34.198627 | orchestrator |       + region                 = (known after apply)
2026-04-05 00:02:34.198631 | orchestrator |       + security_group_ids     = (known after apply)
2026-04-05 00:02:34.198634 | orchestrator |       + tenant_id              = (known after apply)
2026-04-05 00:02:34.198638 | orchestrator |
2026-04-05 00:02:34.198642 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198646 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-05 00:02:34.198649 | orchestrator |         }
2026-04-05 00:02:34.198653 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198657 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-05 00:02:34.198661 | orchestrator |         }
2026-04-05 00:02:34.198664 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198668 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-05 00:02:34.198672 | orchestrator |         }
2026-04-05 00:02:34.198676 | orchestrator |
2026-04-05 00:02:34.198679 | orchestrator |       + binding (known after apply)
2026-04-05 00:02:34.198683 | orchestrator |
2026-04-05 00:02:34.198687 | orchestrator |       + fixed_ip {
2026-04-05 00:02:34.198691 | orchestrator |           + ip_address = "192.168.16.12"
2026-04-05 00:02:34.198694 | orchestrator |           + subnet_id  = (known after apply)
2026-04-05 00:02:34.198698 | orchestrator |         }
2026-04-05 00:02:34.198702 | orchestrator |     }
2026-04-05 00:02:34.198706 | orchestrator |
2026-04-05 00:02:34.198709 | orchestrator |   # openstack_networking_port_v2.node_port_management[3] will be created
2026-04-05 00:02:34.198713 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-05 00:02:34.198717 | orchestrator |       + admin_state_up         = (known after apply)
2026-04-05 00:02:34.198721 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-04-05 00:02:34.198727 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-05 00:02:34.198732 | orchestrator |       + all_tags               = (known after apply)
2026-04-05 00:02:34.198736 | orchestrator |       + device_id              = (known after apply)
2026-04-05 00:02:34.198740 | orchestrator |       + device_owner           = (known after apply)
2026-04-05 00:02:34.198744 | orchestrator |       + dns_assignment         = (known after apply)
2026-04-05 00:02:34.198747 | orchestrator |       + dns_name               = (known after apply)
2026-04-05 00:02:34.198751 | orchestrator |       + id                     = (known after apply)
2026-04-05 00:02:34.198755 | orchestrator |       + mac_address            = (known after apply)
2026-04-05 00:02:34.198759 | orchestrator |       + network_id             = (known after apply)
2026-04-05 00:02:34.198762 | orchestrator |       + port_security_enabled  = (known after apply)
2026-04-05 00:02:34.198766 | orchestrator |       + qos_policy_id          = (known after apply)
2026-04-05 00:02:34.198770 | orchestrator |       + region                 = (known after apply)
2026-04-05 00:02:34.198774 | orchestrator |       + security_group_ids     = (known after apply)
2026-04-05 00:02:34.198777 | orchestrator |       + tenant_id              = (known after apply)
2026-04-05 00:02:34.198781 | orchestrator |
2026-04-05 00:02:34.198785 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198789 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-05 00:02:34.198792 | orchestrator |         }
2026-04-05 00:02:34.198796 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198800 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-05 00:02:34.198804 | orchestrator |         }
2026-04-05 00:02:34.198811 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198815 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-05 00:02:34.198818 | orchestrator |         }
2026-04-05 00:02:34.198822 | orchestrator |
2026-04-05 00:02:34.198829 | orchestrator |       + binding (known after apply)
2026-04-05 00:02:34.198833 | orchestrator |
2026-04-05 00:02:34.198836 | orchestrator |       + fixed_ip {
2026-04-05 00:02:34.198840 | orchestrator |           + ip_address = "192.168.16.13"
2026-04-05 00:02:34.198844 | orchestrator |           + subnet_id  = (known after apply)
2026-04-05 00:02:34.198848 | orchestrator |         }
2026-04-05 00:02:34.198851 | orchestrator |     }
2026-04-05 00:02:34.198855 | orchestrator |
2026-04-05 00:02:34.198859 | orchestrator |   # openstack_networking_port_v2.node_port_management[4] will be created
2026-04-05 00:02:34.198863 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-05 00:02:34.198867 | orchestrator |       + admin_state_up         = (known after apply)
2026-04-05 00:02:34.198870 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-04-05 00:02:34.198874 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-05 00:02:34.198878 | orchestrator |       + all_tags               = (known after apply)
2026-04-05 00:02:34.198882 | orchestrator |       + device_id              = (known after apply)
2026-04-05 00:02:34.198885 | orchestrator |       + device_owner           = (known after apply)
2026-04-05 00:02:34.198889 | orchestrator |       + dns_assignment         = (known after apply)
2026-04-05 00:02:34.198893 | orchestrator |       + dns_name               = (known after apply)
2026-04-05 00:02:34.198908 | orchestrator |       + id                     = (known after apply)
2026-04-05 00:02:34.198912 | orchestrator |       + mac_address            = (known after apply)
2026-04-05 00:02:34.198916 | orchestrator |       + network_id             = (known after apply)
2026-04-05 00:02:34.198920 | orchestrator |       + port_security_enabled  = (known after apply)
2026-04-05 00:02:34.198924 | orchestrator |       + qos_policy_id          = (known after apply)
2026-04-05 00:02:34.198927 | orchestrator |       + region                 = (known after apply)
2026-04-05 00:02:34.198931 | orchestrator |       + security_group_ids     = (known after apply)
2026-04-05 00:02:34.198935 | orchestrator |       + tenant_id              = (known after apply)
2026-04-05 00:02:34.198939 | orchestrator |
2026-04-05 00:02:34.198943 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198947 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-05 00:02:34.198951 | orchestrator |         }
2026-04-05 00:02:34.198954 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198958 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-05 00:02:34.198962 | orchestrator |         }
2026-04-05 00:02:34.198966 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.198970 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-05 00:02:34.198973 | orchestrator |         }
2026-04-05 00:02:34.198977 | orchestrator |
2026-04-05 00:02:34.198981 | orchestrator |       + binding (known after apply)
2026-04-05 00:02:34.198985 | orchestrator |
2026-04-05 00:02:34.198989 | orchestrator |       + fixed_ip {
2026-04-05 00:02:34.198992 | orchestrator |           + ip_address = "192.168.16.14"
2026-04-05 00:02:34.198996 | orchestrator |           + subnet_id  = (known after apply)
2026-04-05 00:02:34.199000 | orchestrator |         }
2026-04-05 00:02:34.199003 | orchestrator |     }
2026-04-05 00:02:34.199007 | orchestrator |
2026-04-05 00:02:34.199011 | orchestrator |   # openstack_networking_port_v2.node_port_management[5] will be created
2026-04-05 00:02:34.199015 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-04-05 00:02:34.199019 | orchestrator |       + admin_state_up         = (known after apply)
2026-04-05 00:02:34.199023 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-04-05 00:02:34.199026 | orchestrator |       + all_security_group_ids = (known after apply)
2026-04-05 00:02:34.199030 | orchestrator |       + all_tags               = (known after apply)
2026-04-05 00:02:34.199034 | orchestrator |       + device_id              = (known after apply)
2026-04-05 00:02:34.199038 | orchestrator |       + device_owner           = (known after apply)
2026-04-05 00:02:34.199041 | orchestrator |       + dns_assignment         = (known after apply)
2026-04-05 00:02:34.199045 | orchestrator |       + dns_name               = (known after apply)
2026-04-05 00:02:34.199049 | orchestrator |       + id                     = (known after apply)
2026-04-05 00:02:34.199052 | orchestrator |       + mac_address            = (known after apply)
2026-04-05 00:02:34.199056 | orchestrator |       + network_id             = (known after apply)
2026-04-05 00:02:34.199060 | orchestrator |       + port_security_enabled  = (known after apply)
2026-04-05 00:02:34.199064 | orchestrator |       + qos_policy_id          = (known after apply)
2026-04-05 00:02:34.199072 | orchestrator |       + region                 = (known after apply)
2026-04-05 00:02:34.199076 | orchestrator |       + security_group_ids     = (known after apply)
2026-04-05 00:02:34.199080 | orchestrator |       + tenant_id              = (known after apply)
2026-04-05 00:02:34.199084 | orchestrator |
2026-04-05 00:02:34.199087 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.199091 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-04-05 00:02:34.199095 | orchestrator |         }
2026-04-05 00:02:34.199099 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.199102 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-04-05 00:02:34.199106 | orchestrator |         }
2026-04-05 00:02:34.199110 | orchestrator |       + allowed_address_pairs {
2026-04-05 00:02:34.199114 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-04-05 00:02:34.199118 | orchestrator |         }
2026-04-05 00:02:34.199121 | orchestrator |
2026-04-05 00:02:34.199128 | orchestrator |       + binding (known after apply)
2026-04-05 00:02:34.199131 | orchestrator |
2026-04-05 00:02:34.199135 | orchestrator |       + fixed_ip {
2026-04-05 00:02:34.199139 | orchestrator |           + ip_address = "192.168.16.15"
2026-04-05 00:02:34.199143 | orchestrator |           + subnet_id  = (known after apply)
2026-04-05 00:02:34.199147 | orchestrator |         }
2026-04-05 00:02:34.199150 | orchestrator |     }
2026-04-05 00:02:34.199154 | orchestrator |
2026-04-05 00:02:34.199158 | orchestrator |   # openstack_networking_router_interface_v2.router_interface will be created
2026-04-05 00:02:34.199162 | orchestrator |   + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-04-05 00:02:34.199165 | orchestrator |       + force_destroy = false
2026-04-05 00:02:34.199169 | orchestrator |       + id            = (known after apply)
2026-04-05 00:02:34.199173 | orchestrator |       + port_id       = (known after apply)
2026-04-05 00:02:34.199177 | orchestrator |       + region        = (known after apply)
2026-04-05 00:02:34.199181 | orchestrator |       + router_id     = (known after apply)
2026-04-05 00:02:34.199184 | orchestrator |       + subnet_id     = (known after apply)
2026-04-05 00:02:34.199188 | orchestrator |     }
2026-04-05 00:02:34.199192 | orchestrator |
2026-04-05 00:02:34.199196 | orchestrator |   # openstack_networking_router_v2.router will be created
2026-04-05 00:02:34.199199 | orchestrator |   + resource "openstack_networking_router_v2" "router" {
2026-04-05 00:02:34.199203 | orchestrator |       + admin_state_up          = (known after apply)
2026-04-05 00:02:34.199207 | orchestrator |       + all_tags                = (known after apply)
2026-04-05 00:02:34.199211 | orchestrator |       + availability_zone_hints = [
2026-04-05 00:02:34.199214 | orchestrator |           + "nova",
2026-04-05 00:02:34.199218 | orchestrator |         ]
2026-04-05 00:02:34.199222 | orchestrator |       + distributed             = (known after apply)
2026-04-05 00:02:34.199226 | orchestrator |       + enable_snat             = (known after apply)
2026-04-05 00:02:34.199229 | orchestrator |       + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-04-05 00:02:34.199233 | orchestrator |       + external_qos_policy_id  = (known after apply)
2026-04-05 00:02:34.199237 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199241 | orchestrator |       + name                    = "testbed"
2026-04-05 00:02:34.199250 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199254 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199258 | orchestrator |
2026-04-05 00:02:34.199262 | orchestrator |       + external_fixed_ip (known after apply)
2026-04-05 00:02:34.199266 | orchestrator |     }
2026-04-05 00:02:34.199269 | orchestrator |
2026-04-05 00:02:34.199273 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-04-05 00:02:34.199277 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-04-05 00:02:34.199281 | orchestrator |       + description             = "ssh"
2026-04-05 00:02:34.199285 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199289 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199293 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199296 | orchestrator |       + port_range_max          = 22
2026-04-05 00:02:34.199300 | orchestrator |       + port_range_min          = 22
2026-04-05 00:02:34.199304 | orchestrator |       + protocol                = "tcp"
2026-04-05 00:02:34.199308 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199314 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199318 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199322 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-04-05 00:02:34.199326 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199329 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199333 | orchestrator |     }
2026-04-05 00:02:34.199337 | orchestrator |
2026-04-05 00:02:34.199341 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-04-05 00:02:34.199345 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-04-05 00:02:34.199348 | orchestrator |       + description             = "wireguard"
2026-04-05 00:02:34.199352 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199356 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199360 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199364 | orchestrator |       + port_range_max          = 51820
2026-04-05 00:02:34.199367 | orchestrator |       + port_range_min          = 51820
2026-04-05 00:02:34.199371 | orchestrator |       + protocol                = "udp"
2026-04-05 00:02:34.199375 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199379 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199382 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199386 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-04-05 00:02:34.199390 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199393 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199397 | orchestrator |     }
2026-04-05 00:02:34.199401 | orchestrator |
2026-04-05 00:02:34.199405 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-04-05 00:02:34.199409 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-04-05 00:02:34.199412 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199416 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199420 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199424 | orchestrator |       + protocol                = "tcp"
2026-04-05 00:02:34.199428 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199431 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199435 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199439 | orchestrator |       + remote_ip_prefix        = "192.168.16.0/20"
2026-04-05 00:02:34.199443 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199446 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199450 | orchestrator |     }
2026-04-05 00:02:34.199454 | orchestrator |
2026-04-05 00:02:34.199458 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-04-05 00:02:34.199462 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-04-05 00:02:34.199466 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199469 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199473 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199477 | orchestrator |       + protocol                = "udp"
2026-04-05 00:02:34.199481 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199484 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199488 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199492 | orchestrator |       + remote_ip_prefix        = "192.168.16.0/20"
2026-04-05 00:02:34.199496 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199499 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199503 | orchestrator |     }
2026-04-05 00:02:34.199507 | orchestrator |
2026-04-05 00:02:34.199511 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-04-05 00:02:34.199518 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-04-05 00:02:34.199522 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199525 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199529 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199533 | orchestrator |       + protocol                = "icmp"
2026-04-05 00:02:34.199537 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199540 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199544 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199548 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-04-05 00:02:34.199552 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199556 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199559 | orchestrator |     }
2026-04-05 00:02:34.199563 | orchestrator |
2026-04-05 00:02:34.199567 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-04-05 00:02:34.199571 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-04-05 00:02:34.199575 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199578 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199585 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199589 | orchestrator |       + protocol                = "tcp"
2026-04-05 00:02:34.199593 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199597 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199603 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199607 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-04-05 00:02:34.199611 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199614 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199618 | orchestrator |     }
2026-04-05 00:02:34.199622 | orchestrator |
2026-04-05 00:02:34.199626 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-04-05 00:02:34.199629 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-04-05 00:02:34.199633 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199637 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199641 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199645 | orchestrator |       + protocol                = "udp"
2026-04-05 00:02:34.199648 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199652 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199656 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199660 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-04-05 00:02:34.199663 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199667 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199671 | orchestrator |     }
2026-04-05 00:02:34.199675 | orchestrator |
2026-04-05 00:02:34.199679 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-04-05 00:02:34.199682 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-04-05 00:02:34.199686 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199692 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199696 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199700 | orchestrator |       + protocol                = "icmp"
2026-04-05 00:02:34.199704 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199708 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199711 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199715 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-04-05 00:02:34.199719 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199723 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199730 | orchestrator |     }
2026-04-05 00:02:34.199734 | orchestrator |
2026-04-05 00:02:34.199737 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-04-05 00:02:34.199741 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-04-05 00:02:34.199745 | orchestrator |       + description             = "vrrp"
2026-04-05 00:02:34.199749 | orchestrator |       + direction               = "ingress"
2026-04-05 00:02:34.199752 | orchestrator |       + ethertype               = "IPv4"
2026-04-05 00:02:34.199756 | orchestrator |       + id                      = (known after apply)
2026-04-05 00:02:34.199760 | orchestrator |       + protocol                = "112"
2026-04-05 00:02:34.199764 | orchestrator |       + region                  = (known after apply)
2026-04-05 00:02:34.199768 | orchestrator |       + remote_address_group_id = (known after apply)
2026-04-05 00:02:34.199771 | orchestrator |       + remote_group_id         = (known after apply)
2026-04-05 00:02:34.199775 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-04-05 00:02:34.199779 | orchestrator |       + security_group_id       = (known after apply)
2026-04-05 00:02:34.199783 | orchestrator |       + tenant_id               = (known after apply)
2026-04-05 00:02:34.199786 | orchestrator |     }
2026-04-05 00:02:34.199790 | orchestrator |
2026-04-05 00:02:34.199794 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-04-05 00:02:34.199798 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-04-05 00:02:34.199802 | orchestrator |       + all_tags    = (known after apply)
2026-04-05 00:02:34.199806 | orchestrator |       + description = "management security group"
2026-04-05 00:02:34.199809 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.199813 | orchestrator |       + name        = "testbed-management"
2026-04-05 00:02:34.199817 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.199821 | orchestrator |       + stateful    = (known after apply)
2026-04-05 00:02:34.199824 | orchestrator |       + tenant_id   = (known after apply)
2026-04-05 00:02:34.199828 | orchestrator |     }
2026-04-05 00:02:34.199832 | orchestrator |
2026-04-05 00:02:34.199836 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-04-05 00:02:34.199840 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-04-05 00:02:34.199843 | orchestrator |       + all_tags    = (known after apply)
2026-04-05 00:02:34.199847 | orchestrator |       + description = "node security group"
2026-04-05 00:02:34.199851 | orchestrator |       + id          = (known after apply)
2026-04-05 00:02:34.199855 | orchestrator |       + name        = "testbed-node"
2026-04-05 00:02:34.199859 | orchestrator |       + region      = (known after apply)
2026-04-05 00:02:34.199862 | orchestrator |       + stateful    = (known after apply)
2026-04-05 00:02:34.199866 | orchestrator |       + tenant_id   = (known after apply)
2026-04-05 00:02:34.199870 | orchestrator |     }
2026-04-05 00:02:34.199874 | orchestrator |
2026-04-05 00:02:34.199877 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-04-05 00:02:34.199881 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-04-05 00:02:34.199885 | orchestrator |       + all_tags          = (known after apply)
2026-04-05 00:02:34.199889 | orchestrator |       + cidr              = "192.168.16.0/20"
2026-04-05 00:02:34.199893 | orchestrator |       + dns_nameservers   = [
2026-04-05 00:02:34.199918 | orchestrator |           + "8.8.8.8",
2026-04-05 00:02:34.199922 | orchestrator |           + "9.9.9.9",
2026-04-05 00:02:34.199926 | orchestrator |         ]
2026-04-05 00:02:34.199930 | orchestrator |       + enable_dhcp       = true
2026-04-05 00:02:34.199934 | orchestrator |       + gateway_ip        = (known after apply)
2026-04-05 00:02:34.199937 | orchestrator |       + id                = (known after apply)
2026-04-05 00:02:34.199941 | orchestrator |       + ip_version        = 4
2026-04-05 00:02:34.199945 | orchestrator |       + ipv6_address_mode = (known after apply)
2026-04-05 00:02:34.199949 | orchestrator |       + ipv6_ra_mode      = (known after apply)
2026-04-05 00:02:34.199953 | orchestrator |       + name              = "subnet-testbed-management"
2026-04-05 00:02:34.199956 | orchestrator | + network_id = (known after apply) 2026-04-05 00:02:34.199960 | orchestrator | + no_gateway = false 2026-04-05 00:02:34.199964 | orchestrator | + region = (known after apply) 2026-04-05 00:02:34.199970 | orchestrator | + service_types = (known after apply) 2026-04-05 00:02:34.199978 | orchestrator | + tenant_id = (known after apply) 2026-04-05 00:02:34.199982 | orchestrator | 2026-04-05 00:02:34.199986 | orchestrator | + allocation_pool { 2026-04-05 00:02:34.199990 | orchestrator | + end = "192.168.31.250" 2026-04-05 00:02:34.199993 | orchestrator | + start = "192.168.31.200" 2026-04-05 00:02:34.199997 | orchestrator | } 2026-04-05 00:02:34.200001 | orchestrator | } 2026-04-05 00:02:34.200005 | orchestrator | 2026-04-05 00:02:34.200009 | orchestrator | # terraform_data.image will be created 2026-04-05 00:02:34.200012 | orchestrator | + resource "terraform_data" "image" { 2026-04-05 00:02:34.200016 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.200020 | orchestrator | + input = "Ubuntu 24.04" 2026-04-05 00:02:34.200024 | orchestrator | + output = (known after apply) 2026-04-05 00:02:34.200028 | orchestrator | } 2026-04-05 00:02:34.200032 | orchestrator | 2026-04-05 00:02:34.200035 | orchestrator | # terraform_data.image_node will be created 2026-04-05 00:02:34.200039 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-05 00:02:34.200043 | orchestrator | + id = (known after apply) 2026-04-05 00:02:34.200047 | orchestrator | + input = "Ubuntu 24.04" 2026-04-05 00:02:34.200050 | orchestrator | + output = (known after apply) 2026-04-05 00:02:34.200054 | orchestrator | } 2026-04-05 00:02:34.200058 | orchestrator | 2026-04-05 00:02:34.200062 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
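The run above ends its plan with the summary line `Plan: 64 to add, 0 to change, 0 to destroy.` In a CI wrapper it can be useful to parse that line and gate on it (for example, failing a periodic job if a plan would destroy resources). A minimal sketch, assuming Terraform's standard English summary format; the regex and helper name are ours, not part of the testbed tooling:

```python
import re

# Matches the summary line Terraform prints at the end of a plan, e.g.
# "Plan: 64 to add, 0 to change, 0 to destroy."
PLAN_RE = re.compile(r"Plan: (\d+) to add, (\d+) to change, (\d+) to destroy\.")

def parse_plan_summary(line: str) -> tuple[int, int, int]:
    """Return (add, change, destroy) counts from a Terraform plan summary line."""
    m = PLAN_RE.search(line)
    if m is None:
        raise ValueError("not a Terraform plan summary line")
    return tuple(int(g) for g in m.groups())

add, change, destroy = parse_plan_summary(
    "Plan: 64 to add, 0 to change, 0 to destroy."
)
# A job could then fail fast on destructive plans:
assert destroy == 0
```

Note that `terraform plan -detailed-exitcode` is the more robust machine interface (exit code 2 means "changes present"); scraping the summary line is only a convenience when you already capture the console output, as this job does.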
2026-04-05 00:02:34.200066 | orchestrator | 2026-04-05 00:02:34.200069 | orchestrator | Changes to Outputs: 2026-04-05 00:02:34.200073 | orchestrator | + manager_address = (sensitive value) 2026-04-05 00:02:34.200077 | orchestrator | + private_key = (sensitive value) 2026-04-05 00:02:34.450071 | orchestrator | terraform_data.image: Creating... 2026-04-05 00:02:34.450135 | orchestrator | terraform_data.image_node: Creating... 2026-04-05 00:02:34.450145 | orchestrator | terraform_data.image: Creation complete after 0s [id=6a19b98a-548b-6dde-eeee-277a273881bc] 2026-04-05 00:02:34.451168 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=b69217db-856d-e1ad-eb84-48747eb777ff] 2026-04-05 00:02:34.461973 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-05 00:02:34.469818 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-05 00:02:34.469886 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-05 00:02:34.470247 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-05 00:02:34.471492 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-05 00:02:34.474928 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-04-05 00:02:34.474968 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-04-05 00:02:34.475640 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-05 00:02:34.476992 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-05 00:02:34.484747 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-05 00:02:34.909501 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-05 00:02:34.917001 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2026-04-05 00:02:34.918672 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-05 00:02:34.922725 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-05 00:02:35.011099 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-04-05 00:02:35.018925 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-05 00:02:35.454674 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=cae904b1-ee15-4807-9e7a-c82bc86ec320] 2026-04-05 00:02:35.466339 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-04-05 00:02:38.101613 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=6aa9f314-df3a-4dde-8ae5-362160a07966] 2026-04-05 00:02:38.109328 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-04-05 00:02:38.116752 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=1bba6e7f-491d-44c1-b292-643b4f29b95d] 2026-04-05 00:02:38.123459 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-04-05 00:02:38.144796 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=b0d5e8f5-5539-4914-ae8f-3a21993d2a92] 2026-04-05 00:02:38.146548 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=1177e3c7-06af-4e5c-a5c6-38f8cbd69f30] 2026-04-05 00:02:38.155392 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-04-05 00:02:38.159721 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
2026-04-05 00:02:38.169876 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=bbb51bc2-5c72-44e5-9d02-9dee12b3d087] 2026-04-05 00:02:38.170375 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=f1da7dba-c9cf-4b54-92a2-357ae45f4304] 2026-04-05 00:02:38.178298 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-04-05 00:02:38.181257 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-04-05 00:02:38.233949 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=33101796-df65-4afe-85e5-47b8cf02a1f2] 2026-04-05 00:02:38.237098 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=a133f214-06af-4f92-a3a2-2d6b80cceed9] 2026-04-05 00:02:38.254722 | orchestrator | local_file.id_rsa_pub: Creating... 2026-04-05 00:02:38.257038 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-04-05 00:02:38.261920 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=599320eacd5b82810fc5d4f7fb5714dfc1dd971b] 2026-04-05 00:02:38.263417 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=e0bc387ade04d0118ace34794d30b52a7f606f44] 2026-04-05 00:02:38.267681 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-04-05 00:02:38.288143 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=24ae3204-b804-4dec-a460-b72326a00767] 2026-04-05 00:02:38.863330 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc] 2026-04-05 00:02:40.146120 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=0e6844ee-1173-4619-8582-a192beb244a2] 2026-04-05 00:02:40.154385 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-04-05 00:02:41.602694 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=3843aef3-dc24-4830-b144-e5ace4620886] 2026-04-05 00:02:41.644753 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=6f8d9972-819f-4b60-a30c-3e1038c24698] 2026-04-05 00:02:41.660841 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=989468f1-c97d-420d-8f1d-aaccb4460869] 2026-04-05 00:02:41.711145 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=6b217fde-c53a-4bce-8d4f-676cb0a28367] 2026-04-05 00:02:41.741950 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=51f889b5-0c19-4400-81f2-c1867e388db0] 2026-04-05 00:02:41.776998 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=cff99bdd-b08e-41cb-b514-098ff9f837f7] 2026-04-05 00:02:43.233571 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=b88edf22-0f66-480e-9d6f-1296f4d7a2a7] 2026-04-05 00:02:43.247112 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-04-05 00:02:43.247728 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 
2026-04-05 00:02:43.247820 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-04-05 00:02:43.448058 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=64a95cce-6088-46fd-b178-05b2ce9a93bf] 2026-04-05 00:02:43.460067 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-04-05 00:02:43.464971 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-04-05 00:02:43.467899 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-04-05 00:02:43.470507 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-04-05 00:02:43.470762 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-04-05 00:02:43.472222 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-04-05 00:02:43.475732 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-04-05 00:02:43.476140 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-04-05 00:02:43.552290 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=e5480e61-788f-45f4-bfc1-560629cb4527] 2026-04-05 00:02:43.568122 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-04-05 00:02:43.643672 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=faf455de-100e-45fc-bd76-84fce48141ad] 2026-04-05 00:02:43.657296 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 
2026-04-05 00:02:43.807442 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=2d98668d-68b3-43f0-be79-d24e94af7cfc] 2026-04-05 00:02:43.813233 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-04-05 00:02:44.015100 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=23b9d282-d83b-4210-8d0c-b2b8664c35ab] 2026-04-05 00:02:44.021654 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-04-05 00:02:44.170622 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=7b8ae122-a7ba-4a7d-a3f8-57487625ccd5] 2026-04-05 00:02:44.181267 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-04-05 00:02:44.190665 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=698e800f-43ba-46f2-b732-d4eb9ba60aef] 2026-04-05 00:02:44.196500 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-04-05 00:02:44.238413 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=9ed36259-61e3-48e2-bdbc-79fd82730a48] 2026-04-05 00:02:44.243530 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-04-05 00:02:44.258273 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=bfdbeb62-5294-4214-a5a5-ac08b5cc22ba] 2026-04-05 00:02:44.266857 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 
2026-04-05 00:02:44.268243 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=3f9e16ba-e2aa-4c3c-b5d8-78368c1ad9bb] 2026-04-05 00:02:44.366966 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=a05719fd-1274-4d8e-85bf-e43ae989583e] 2026-04-05 00:02:44.535719 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=9131712c-f2a9-4ecb-8ba3-ba9a62e20d32] 2026-04-05 00:02:44.554685 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=822a6c5f-9797-4f9e-932e-cc9b22849501] 2026-04-05 00:02:44.778722 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=28c6a477-2a77-4b4a-8a3a-f42ce70ea4fd] 2026-04-05 00:02:44.795573 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=4b15b753-0114-424d-811d-7e8fe9301539] 2026-04-05 00:02:44.854135 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=5cccf19d-e496-4c6a-99fd-879f354d82eb] 2026-04-05 00:02:44.957607 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=167bb28a-419f-4f36-bff6-c19c81752eb9] 2026-04-05 00:02:45.097717 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=3db35acb-37d4-4ce2-a2c9-9c46e8e6499d] 2026-04-05 00:02:46.399535 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=94914c16-739c-43a8-9331-3079adb2c8ab] 2026-04-05 00:02:46.420783 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-04-05 00:02:46.432470 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 
2026-04-05 00:02:46.438346 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-04-05 00:02:46.442293 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-04-05 00:02:46.444443 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-04-05 00:02:46.448194 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-04-05 00:02:46.451217 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-04-05 00:02:47.855720 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=fb956e68-eee5-40e9-8975-37f6439bb211] 2026-04-05 00:02:47.870489 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-04-05 00:02:47.872695 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-04-05 00:02:47.873866 | orchestrator | local_file.inventory: Creating... 2026-04-05 00:02:47.878304 | orchestrator | local_file.inventory: Creation complete after 0s [id=08055672765f9ee2d12cf455a3f897bec500fe9b] 2026-04-05 00:02:47.881373 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=13b2e898931ab930bbd31bf358aabdef0a12727e] 2026-04-05 00:02:48.798893 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=fb956e68-eee5-40e9-8975-37f6439bb211] 2026-04-05 00:02:56.438423 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-04-05 00:02:56.442415 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-04-05 00:02:56.445701 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-04-05 00:02:56.448194 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2026-04-05 00:02:56.452433 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-04-05 00:02:56.452590 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-04-05 00:03:06.446276 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-04-05 00:03:06.446365 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-04-05 00:03:06.446388 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-04-05 00:03:06.448481 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-04-05 00:03:06.452687 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-04-05 00:03:06.452746 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-04-05 00:03:07.120621 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=6c083648-0ed8-439d-857d-afd850d3a227] 2026-04-05 00:03:16.455557 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-04-05 00:03:16.455657 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-04-05 00:03:16.455667 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-04-05 00:03:16.455675 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-04-05 00:03:16.455692 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2026-04-05 00:03:17.180297 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=a0e22163-e25d-4536-8c06-77400fe3e2cd] 2026-04-05 00:03:17.209671 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=5cdd855d-9e17-4163-aa3f-964b3ccf0de9] 2026-04-05 00:03:17.721864 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 32s [id=5545d947-2ed7-48c8-ba25-0df7b0db60f9] 2026-04-05 00:03:26.455773 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-04-05 00:03:26.455918 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-04-05 00:03:36.456057 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-04-05 00:03:36.456143 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed] 2026-04-05 00:03:37.267439 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=ce67118d-624e-487c-b72a-eebb5a85af33] 2026-04-05 00:03:37.476353 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=b3909c1d-26d3-465d-ab77-18f6f4b14e37] 2026-04-05 00:03:37.498804 | orchestrator | null_resource.node_semaphore: Creating... 2026-04-05 00:03:37.498872 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-04-05 00:03:37.514666 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-04-05 00:03:37.517898 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-04-05 00:03:37.518219 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-04-05 00:03:37.518739 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 
2026-04-05 00:03:37.522557 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-04-05 00:03:37.522592 | orchestrator | null_resource.node_semaphore: Creation complete after 1s [id=5104373878853871120] 2026-04-05 00:03:37.530446 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-04-05 00:03:37.532767 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-04-05 00:03:37.539845 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-04-05 00:03:37.553423 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-04-05 00:03:40.926787 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=b3909c1d-26d3-465d-ab77-18f6f4b14e37/1177e3c7-06af-4e5c-a5c6-38f8cbd69f30] 2026-04-05 00:03:40.927483 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=5cdd855d-9e17-4163-aa3f-964b3ccf0de9/1bba6e7f-491d-44c1-b292-643b4f29b95d] 2026-04-05 00:03:40.958493 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=b3909c1d-26d3-465d-ab77-18f6f4b14e37/6aa9f314-df3a-4dde-8ae5-362160a07966] 2026-04-05 00:03:40.959126 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=a0e22163-e25d-4536-8c06-77400fe3e2cd/b0d5e8f5-5539-4914-ae8f-3a21993d2a92] 2026-04-05 00:03:40.960768 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=5cdd855d-9e17-4163-aa3f-964b3ccf0de9/f1da7dba-c9cf-4b54-92a2-357ae45f4304] 2026-04-05 00:03:40.977642 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=a0e22163-e25d-4536-8c06-77400fe3e2cd/24ae3204-b804-4dec-a460-b72326a00767] 2026-04-05 00:03:47.057280 
| orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=5cdd855d-9e17-4163-aa3f-964b3ccf0de9/a133f214-06af-4f92-a3a2-2d6b80cceed9] 2026-04-05 00:03:47.057676 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=b3909c1d-26d3-465d-ab77-18f6f4b14e37/bbb51bc2-5c72-44e5-9d02-9dee12b3d087] 2026-04-05 00:03:47.088379 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=a0e22163-e25d-4536-8c06-77400fe3e2cd/33101796-df65-4afe-85e5-47b8cf02a1f2] 2026-04-05 00:03:47.553900 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-04-05 00:03:57.563124 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-04-05 00:03:58.514269 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=9a1f2c89-8cf8-4d2f-baa4-7f2e681a114b] 2026-04-05 00:03:58.569219 | orchestrator | 2026-04-05 00:03:58.569317 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
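After `Apply complete!`, the log shows `manager_address` and `private_key` printed as empty because both outputs are marked sensitive; the following "Fetch manager address" task has to read them another way. `terraform output -json` exposes sensitive values in a stable JSON shape. A minimal sketch of consuming that shape; the JSON below uses placeholder values (the real ones are suppressed in the log), and `get_output` is an illustrative helper, not part of the testbed scripts:

```python
import json

# Placeholder document in the shape `terraform output -json` emits for the
# two outputs above. Values here are invented stand-ins.
raw = """
{
  "manager_address": {"sensitive": true, "type": "string", "value": "192.0.2.10"},
  "private_key":     {"sensitive": true, "type": "string", "value": "PLACEHOLDER-KEY"}
}
"""

outputs = json.loads(raw)

def get_output(outputs: dict, name: str) -> str:
    """Return the value of a Terraform output, sensitive or not."""
    return outputs[name]["value"]

manager_address = get_output(outputs, "manager_address")
```

In practice the JSON would come from `subprocess.run(["terraform", "output", "-json"], capture_output=True)` in the Terraform working directory; sensitive values should then be kept out of job logs, as Zuul's `no_log` does for the hostkey task later in this run.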
2026-04-05 00:03:58.569336 | orchestrator | 2026-04-05 00:03:58.569363 | orchestrator | Outputs: 2026-04-05 00:03:58.569371 | orchestrator | 2026-04-05 00:03:58.569378 | orchestrator | manager_address = 2026-04-05 00:03:58.569387 | orchestrator | private_key = 2026-04-05 00:03:58.691733 | orchestrator | ok: Runtime: 0:01:31.108931 2026-04-05 00:03:58.726708 | 2026-04-05 00:03:58.726867 | TASK [Fetch manager address] 2026-04-05 00:03:59.213083 | orchestrator | ok 2026-04-05 00:03:59.220773 | 2026-04-05 00:03:59.220898 | TASK [Set manager_host address] 2026-04-05 00:03:59.303024 | orchestrator | ok 2026-04-05 00:03:59.312180 | 2026-04-05 00:03:59.312462 | LOOP [Update ansible collections] 2026-04-05 00:04:00.199602 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 00:04:00.199991 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-05 00:04:00.200050 | orchestrator | Starting galaxy collection install process 2026-04-05 00:04:00.200077 | orchestrator | Process install dependency map 2026-04-05 00:04:00.200099 | orchestrator | Starting collection install process 2026-04-05 00:04:00.200119 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2026-04-05 00:04:00.200141 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2026-04-05 00:04:00.200166 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-04-05 00:04:00.200219 | orchestrator | ok: Item: commons Runtime: 0:00:00.585250 2026-04-05 00:04:01.137861 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 00:04:01.138041 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-05 00:04:01.138098 | orchestrator | Starting galaxy collection 
install process 2026-04-05 00:04:01.138143 | orchestrator | Process install dependency map 2026-04-05 00:04:01.138184 | orchestrator | Starting collection install process 2026-04-05 00:04:01.138222 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2026-04-05 00:04:01.138261 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2026-04-05 00:04:01.138301 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-05 00:04:01.138363 | orchestrator | ok: Item: services Runtime: 0:00:00.658866 2026-04-05 00:04:01.163281 | 2026-04-05 00:04:01.163443 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-05 00:04:14.811586 | orchestrator | ok 2026-04-05 00:04:14.821355 | 2026-04-05 00:04:14.821490 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-05 00:05:14.857666 | orchestrator | ok 2026-04-05 00:05:14.867884 | 2026-04-05 00:05:14.868001 | TASK [Fetch manager ssh hostkey] 2026-04-05 00:05:16.453441 | orchestrator | Output suppressed because no_log was given 2026-04-05 00:05:16.469634 | 2026-04-05 00:05:16.469805 | TASK [Get ssh keypair from terraform environment] 2026-04-05 00:05:17.006108 | orchestrator | ok: Runtime: 0:00:00.006843 2026-04-05 00:05:17.021096 | 2026-04-05 00:05:17.021277 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-05 00:05:17.069891 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
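The task "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" above is Ansible's `wait_for` with `search_regex`: it polls the port and only succeeds once the SSH banner is readable, which filters out the window where the VM answers TCP but sshd is not yet up. The same check can be sketched in a few lines of Python; `wait_for_ssh` is an illustrative helper, assuming the server sends its banner first (as SSH servers do):

```python
import socket
import time

def wait_for_ssh(host: str, port: int = 22, timeout: float = 300.0,
                 token: bytes = b"OpenSSH") -> bool:
    """Poll until `host:port` accepts a connection and its banner contains `token`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.settimeout(5)
                banner = sock.recv(256)  # e.g. b"SSH-2.0-OpenSSH_9.6p1 ..."
                if token in banner:
                    return True
        except OSError:
            pass  # not listening yet, or banner read timed out
        time.sleep(1)
    return False
```

The subsequent "Wait a little longer for the manager" pause (a flat 60 seconds in this run) covers what the banner check cannot: cloud-init and other boot-time work that continues after sshd starts, which the later "Check /var/lib/cloud/instance/boot-finished" task then verifies properly.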
2026-04-05 00:05:17.081491 | 2026-04-05 00:05:17.081628 | TASK [Run manager part 0] 2026-04-05 00:05:17.878436 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 00:05:17.923213 | orchestrator | 2026-04-05 00:05:17.923273 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-05 00:05:17.923284 | orchestrator | 2026-04-05 00:05:17.923302 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-05 00:05:21.926728 | orchestrator | ok: [testbed-manager] 2026-04-05 00:05:21.926781 | orchestrator | 2026-04-05 00:05:21.926801 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-05 00:05:21.926811 | orchestrator | 2026-04-05 00:05:21.926820 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:05:23.986840 | orchestrator | ok: [testbed-manager] 2026-04-05 00:05:23.987007 | orchestrator | 2026-04-05 00:05:23.987028 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-05 00:05:24.684242 | orchestrator | ok: [testbed-manager] 2026-04-05 00:05:24.684348 | orchestrator | 2026-04-05 00:05:24.684370 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-05 00:05:24.731819 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:05:24.731884 | orchestrator | 2026-04-05 00:05:24.731896 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-05 00:05:24.761271 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:05:24.761367 | orchestrator | 2026-04-05 00:05:24.761386 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-05 00:05:24.794654 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:05:24.794745 | 
orchestrator | 2026-04-05 00:05:24.794760 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-05 00:05:25.574353 | orchestrator | changed: [testbed-manager] 2026-04-05 00:05:25.574451 | orchestrator | 2026-04-05 00:05:25.574479 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-05 00:08:51.137416 | orchestrator | changed: [testbed-manager] 2026-04-05 00:08:51.137497 | orchestrator | 2026-04-05 00:08:51.137516 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-05 00:10:30.304689 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:30.304734 | orchestrator | 2026-04-05 00:10:30.304745 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-05 00:10:55.990722 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:55.990791 | orchestrator | 2026-04-05 00:10:55.990800 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-05 00:11:05.154201 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:05.154320 | orchestrator | 2026-04-05 00:11:05.154333 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-05 00:11:05.194788 | orchestrator | ok: [testbed-manager] 2026-04-05 00:11:05.194848 | orchestrator | 2026-04-05 00:11:05.194860 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-05 00:11:05.983079 | orchestrator | ok: [testbed-manager] 2026-04-05 00:11:05.983173 | orchestrator | 2026-04-05 00:11:05.983189 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-05 00:11:06.719701 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:06.719777 | orchestrator | 2026-04-05 00:11:06.719791 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-04-05 00:11:13.327047 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:13.327122 | orchestrator | 2026-04-05 00:11:13.327133 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-05 00:11:19.347363 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:19.347443 | orchestrator | 2026-04-05 00:11:19.347455 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-05 00:11:22.126606 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:22.126701 | orchestrator | 2026-04-05 00:11:22.126718 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-05 00:11:23.925766 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:23.925811 | orchestrator | 2026-04-05 00:11:23.925820 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-05 00:11:25.094498 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-05 00:11:25.094596 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-05 00:11:25.094611 | orchestrator | 2026-04-05 00:11:25.094629 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-05 00:11:25.137233 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-05 00:11:25.137282 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-05 00:11:25.137288 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-05 00:11:25.137294 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-05 00:11:28.438493 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-05 00:11:28.438585 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-05 00:11:28.438601 | orchestrator | 2026-04-05 00:11:28.438615 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-05 00:11:29.029849 | orchestrator | changed: [testbed-manager] 2026-04-05 00:11:29.029891 | orchestrator | 2026-04-05 00:11:29.029899 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-05 00:13:53.124902 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-05 00:13:53.124996 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-05 00:13:53.125013 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-05 00:13:53.125025 | orchestrator | 2026-04-05 00:13:53.125037 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-05 00:13:55.554900 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-05 00:13:55.555013 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-05 00:13:55.555037 | orchestrator | 2026-04-05 00:13:55.555059 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-05 00:13:55.555081 | orchestrator | 2026-04-05 00:13:55.555101 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:13:57.020333 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:57.020452 | orchestrator | 2026-04-05 00:13:57.020471 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-05 00:13:57.070284 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:57.070343 | 
orchestrator | 2026-04-05 00:13:57.070352 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-05 00:13:57.143230 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:57.143287 | orchestrator | 2026-04-05 00:13:57.143294 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-05 00:13:57.939150 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:57.939202 | orchestrator | 2026-04-05 00:13:57.939213 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-05 00:13:58.697992 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:58.698136 | orchestrator | 2026-04-05 00:13:58.698150 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-05 00:14:00.150918 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-05 00:14:00.150976 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-05 00:14:00.150989 | orchestrator | 2026-04-05 00:14:00.151001 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-05 00:14:01.554266 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:01.554359 | orchestrator | 2026-04-05 00:14:01.554375 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-05 00:14:03.443182 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 00:14:03.443289 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-05 00:14:03.443331 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:14:03.443353 | orchestrator | 2026-04-05 00:14:03.443376 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-05 00:14:03.504007 | orchestrator | skipping: 
[testbed-manager] 2026-04-05 00:14:03.504068 | orchestrator | 2026-04-05 00:14:03.504076 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-05 00:14:03.582598 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:03.582691 | orchestrator | 2026-04-05 00:14:03.582708 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-05 00:14:04.179568 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:04.179606 | orchestrator | 2026-04-05 00:14:04.179619 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-05 00:14:04.254177 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:04.254261 | orchestrator | 2026-04-05 00:14:04.254277 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-05 00:14:05.173266 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:14:05.173453 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:05.173473 | orchestrator | 2026-04-05 00:14:05.173487 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-05 00:14:05.207468 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:05.207545 | orchestrator | 2026-04-05 00:14:05.207559 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-05 00:14:05.243072 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:05.243376 | orchestrator | 2026-04-05 00:14:05.243399 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-05 00:14:05.274655 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:05.274722 | orchestrator | 2026-04-05 00:14:05.274733 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-05 00:14:05.360055 | 
orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:05.360158 | orchestrator | 2026-04-05 00:14:05.360165 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-05 00:14:06.098620 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:06.098867 | orchestrator | 2026-04-05 00:14:06.098892 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-05 00:14:06.098906 | orchestrator | 2026-04-05 00:14:06.098920 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:14:07.584684 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:07.584719 | orchestrator | 2026-04-05 00:14:07.584726 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-05 00:14:08.596984 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:08.597047 | orchestrator | 2026-04-05 00:14:08.597061 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:14:08.597074 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-05 00:14:08.597086 | orchestrator | 2026-04-05 00:14:08.948704 | orchestrator | ok: Runtime: 0:08:51.334282 2026-04-05 00:14:08.965709 | 2026-04-05 00:14:08.965859 | TASK [Point out that logging in on the manager is now possible] 2026-04-05 00:14:09.013222 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-05 00:14:09.023975 | 2026-04-05 00:14:09.024163 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-05 00:14:09.058939 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
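As an aside, the PLAY RECAP line above (ok=33 changed=23 ... failed=0) is the machine-checkable summary of the play. A minimal shell filter like the following — illustrative only, not part of the job — can gate on its failed count; the sample line mirrors the recap printed above:

```shell
# Illustrative: pull the failed= count out of an Ansible PLAY RECAP summary line.
# The sample recap matches the one emitted above for testbed-manager.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0'
failed=$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
echo "failed=$failed"
```

A CI wrapper could then exit nonzero whenever `$failed` is not 0.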
2026-04-05 00:14:09.070740 | 2026-04-05 00:14:09.070905 | TASK [Run manager part 1 + 2] 2026-04-05 00:14:09.936970 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 00:14:09.990683 | orchestrator | 2026-04-05 00:14:09.990732 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-05 00:14:09.990739 | orchestrator | 2026-04-05 00:14:09.990750 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:14:13.033149 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:13.033202 | orchestrator | 2026-04-05 00:14:13.033225 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-05 00:14:13.065396 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:13.065440 | orchestrator | 2026-04-05 00:14:13.065448 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-05 00:14:13.111433 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:13.111709 | orchestrator | 2026-04-05 00:14:13.111728 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-05 00:14:13.165352 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:13.165409 | orchestrator | 2026-04-05 00:14:13.165420 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-05 00:14:13.242259 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:13.242313 | orchestrator | 2026-04-05 00:14:13.242322 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-05 00:14:13.295529 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:13.295575 | orchestrator | 2026-04-05 00:14:13.295583 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-05 00:14:13.339124 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-05 00:14:13.339175 | orchestrator | 2026-04-05 00:14:13.339181 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-05 00:14:14.087715 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:14.087859 | orchestrator | 2026-04-05 00:14:14.087878 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-05 00:14:14.140106 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:14.140185 | orchestrator | 2026-04-05 00:14:14.140202 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-05 00:14:15.604699 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:15.604832 | orchestrator | 2026-04-05 00:14:15.604855 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-05 00:14:16.197348 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:16.197440 | orchestrator | 2026-04-05 00:14:16.197457 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-05 00:14:17.439314 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:17.439667 | orchestrator | 2026-04-05 00:14:17.439688 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-05 00:14:33.837985 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:33.838117 | orchestrator | 2026-04-05 00:14:33.838133 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-05 00:14:34.578054 | orchestrator | ok: [testbed-manager] 2026-04-05 00:14:34.578094 | orchestrator | 2026-04-05 00:14:34.578104 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-04-05 00:14:34.632586 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:34.632623 | orchestrator | 2026-04-05 00:14:34.632631 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-05 00:14:35.607839 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:35.607884 | orchestrator | 2026-04-05 00:14:35.607892 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-05 00:14:36.624697 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:36.624809 | orchestrator | 2026-04-05 00:14:36.624828 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-05 00:14:37.213705 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:37.213788 | orchestrator | 2026-04-05 00:14:37.213799 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-05 00:14:37.253291 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-05 00:14:37.253409 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-05 00:14:37.253426 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-05 00:14:37.253439 | orchestrator | deprecation_warnings=False in ansible.cfg. 
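The DEPRECATION WARNING above notes that these messages can be silenced by setting `deprecation_warnings=False` in ansible.cfg. A minimal sketch of that fragment — writing it to the current directory is an assumption, since Ansible searches several config locations:

```shell
# Sketch: write the setting named in the warning above into a local ansible.cfg.
# Path is an assumption; Ansible also honors ANSIBLE_CONFIG and ~/.ansible.cfg.
cat > ansible.cfg <<'EOF'
[defaults]
deprecation_warnings = False
EOF
```

With this in place, subsequent ansible-playbook runs in that directory no longer print deprecation warnings.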
2026-04-05 00:14:39.584950 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:39.585021 | orchestrator | 2026-04-05 00:14:39.585030 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-05 00:14:48.404438 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-05 00:14:48.404475 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-05 00:14:48.404482 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-05 00:14:48.404487 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-05 00:14:48.404495 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-05 00:14:48.404499 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-05 00:14:48.404504 | orchestrator | 2026-04-05 00:14:48.404509 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-05 00:14:49.528297 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:49.528338 | orchestrator | 2026-04-05 00:14:49.528346 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-05 00:14:52.733872 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:52.733938 | orchestrator | 2026-04-05 00:14:52.733949 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-05 00:14:52.770976 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:52.771026 | orchestrator | 2026-04-05 00:14:52.771032 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-05 00:16:50.811014 | orchestrator | changed: [testbed-manager] 2026-04-05 00:16:50.811251 | orchestrator | 2026-04-05 00:16:50.811290 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-05 00:16:52.311433 | orchestrator | ok: [testbed-manager] 2026-04-05 00:16:52.311519 | 
orchestrator | 2026-04-05 00:16:52.311537 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:16:52.311551 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-05 00:16:52.311564 | orchestrator | 2026-04-05 00:16:52.714724 | orchestrator | ok: Runtime: 0:02:43.028171 2026-04-05 00:16:52.733750 | 2026-04-05 00:16:52.733931 | TASK [Reboot manager] 2026-04-05 00:16:54.275816 | orchestrator | ok: Runtime: 0:00:01.041338 2026-04-05 00:16:54.292974 | 2026-04-05 00:16:54.293150 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-05 00:17:10.763282 | orchestrator | ok 2026-04-05 00:17:10.773197 | 2026-04-05 00:17:10.773323 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-05 00:18:10.826264 | orchestrator | ok 2026-04-05 00:18:10.835153 | 2026-04-05 00:18:10.835311 | TASK [Deploy manager + bootstrap nodes] 2026-04-05 00:18:13.407190 | orchestrator | 2026-04-05 00:18:13.407371 | orchestrator | # DEPLOY MANAGER 2026-04-05 00:18:13.407393 | orchestrator | 2026-04-05 00:18:13.407406 | orchestrator | + set -e 2026-04-05 00:18:13.407419 | orchestrator | + echo 2026-04-05 00:18:13.407431 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-05 00:18:13.407446 | orchestrator | + echo 2026-04-05 00:18:13.407492 | orchestrator | + cat /opt/manager-vars.sh 2026-04-05 00:18:13.410512 | orchestrator | export NUMBER_OF_NODES=6 2026-04-05 00:18:13.410550 | orchestrator | 2026-04-05 00:18:13.410562 | orchestrator | export CEPH_VERSION= 2026-04-05 00:18:13.410573 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-05 00:18:13.410585 | orchestrator | export MANAGER_VERSION=10.0.0 2026-04-05 00:18:13.410595 | orchestrator | export OPENSTACK_VERSION= 2026-04-05 00:18:13.410605 | orchestrator | 2026-04-05 00:18:13.410615 | orchestrator | export ARA=false 2026-04-05 00:18:13.410671 | orchestrator | export 
DEPLOY_MODE=manager 2026-04-05 00:18:13.410682 | orchestrator | export TEMPEST=true 2026-04-05 00:18:13.410692 | orchestrator | export IS_ZUUL=true 2026-04-05 00:18:13.410709 | orchestrator | 2026-04-05 00:18:13.410726 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.32 2026-04-05 00:18:13.410737 | orchestrator | export EXTERNAL_API=false 2026-04-05 00:18:13.410746 | orchestrator | 2026-04-05 00:18:13.410762 | orchestrator | export IMAGE_USER=ubuntu 2026-04-05 00:18:13.410771 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:13.410781 | orchestrator | 2026-04-05 00:18:13.410794 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-05 00:18:13.410803 | orchestrator | 2026-04-05 00:18:13.410815 | orchestrator | + echo 2026-04-05 00:18:13.410832 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 00:18:13.411662 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 00:18:13.411688 | orchestrator | ++ INTERACTIVE=false 2026-04-05 00:18:13.411706 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:18:13.411721 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:18:13.411734 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 00:18:13.411745 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 00:18:13.411756 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 00:18:13.411765 | orchestrator | ++ export CEPH_VERSION= 2026-04-05 00:18:13.411774 | orchestrator | ++ CEPH_VERSION= 2026-04-05 00:18:13.411784 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 00:18:13.411794 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 00:18:13.411804 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-05 00:18:13.411814 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-05 00:18:13.411830 | orchestrator | ++ export OPENSTACK_VERSION= 2026-04-05 00:18:13.411857 | orchestrator | ++ OPENSTACK_VERSION= 2026-04-05 00:18:13.411880 | orchestrator | ++ export ARA=false 2026-04-05 00:18:13.411896 | 
orchestrator | ++ ARA=false 2026-04-05 00:18:13.411911 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 00:18:13.411937 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 00:18:13.411953 | orchestrator | ++ export TEMPEST=true 2026-04-05 00:18:13.411967 | orchestrator | ++ TEMPEST=true 2026-04-05 00:18:13.411983 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 00:18:13.411998 | orchestrator | ++ IS_ZUUL=true 2026-04-05 00:18:13.412014 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.32 2026-04-05 00:18:13.412031 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.32 2026-04-05 00:18:13.412048 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 00:18:13.412063 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 00:18:13.412077 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 00:18:13.412087 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 00:18:13.412097 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:13.412106 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:13.412116 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 00:18:13.412126 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 00:18:13.412135 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-05 00:18:13.466211 | orchestrator | + docker version 2026-04-05 00:18:13.581437 | orchestrator | Client: Docker Engine - Community 2026-04-05 00:18:13.581504 | orchestrator | Version: 27.5.1 2026-04-05 00:18:13.581511 | orchestrator | API version: 1.47 2026-04-05 00:18:13.581516 | orchestrator | Go version: go1.22.11 2026-04-05 00:18:13.581520 | orchestrator | Git commit: 9f9e405 2026-04-05 00:18:13.581524 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-05 00:18:13.581529 | orchestrator | OS/Arch: linux/amd64 2026-04-05 00:18:13.581533 | orchestrator | Context: default 2026-04-05 00:18:13.581537 | orchestrator | 2026-04-05 00:18:13.581541 | orchestrator | Server: Docker Engine - 
Community 2026-04-05 00:18:13.581545 | orchestrator | Engine: 2026-04-05 00:18:13.581549 | orchestrator | Version: 27.5.1 2026-04-05 00:18:13.581553 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-05 00:18:13.581578 | orchestrator | Go version: go1.22.11 2026-04-05 00:18:13.581582 | orchestrator | Git commit: 4c9b3b0 2026-04-05 00:18:13.581586 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-05 00:18:13.581590 | orchestrator | OS/Arch: linux/amd64 2026-04-05 00:18:13.581593 | orchestrator | Experimental: false 2026-04-05 00:18:13.581597 | orchestrator | containerd: 2026-04-05 00:18:13.581601 | orchestrator | Version: v2.2.2 2026-04-05 00:18:13.581605 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-05 00:18:13.581609 | orchestrator | runc: 2026-04-05 00:18:13.581613 | orchestrator | Version: 1.3.4 2026-04-05 00:18:13.581617 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-05 00:18:13.581679 | orchestrator | docker-init: 2026-04-05 00:18:13.581683 | orchestrator | Version: 0.19.0 2026-04-05 00:18:13.581687 | orchestrator | GitCommit: de40ad0 2026-04-05 00:18:13.585889 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-05 00:18:13.595048 | orchestrator | + set -e 2026-04-05 00:18:13.595113 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 00:18:13.595127 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 00:18:13.595136 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 00:18:13.595147 | orchestrator | ++ export CEPH_VERSION= 2026-04-05 00:18:13.595157 | orchestrator | ++ CEPH_VERSION= 2026-04-05 00:18:13.595171 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 00:18:13.595187 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 00:18:13.595200 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-05 00:18:13.595214 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-05 00:18:13.595226 | orchestrator | ++ export OPENSTACK_VERSION= 2026-04-05 
00:18:13.595235 | orchestrator | ++ OPENSTACK_VERSION= 2026-04-05 00:18:13.595244 | orchestrator | ++ export ARA=false 2026-04-05 00:18:13.595253 | orchestrator | ++ ARA=false 2026-04-05 00:18:13.595262 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 00:18:13.595271 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 00:18:13.595279 | orchestrator | ++ export TEMPEST=true 2026-04-05 00:18:13.595288 | orchestrator | ++ TEMPEST=true 2026-04-05 00:18:13.595297 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 00:18:13.595306 | orchestrator | ++ IS_ZUUL=true 2026-04-05 00:18:13.595315 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.32 2026-04-05 00:18:13.595324 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.32 2026-04-05 00:18:13.595344 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 00:18:13.595362 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 00:18:13.595371 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 00:18:13.595380 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 00:18:13.595389 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:13.595398 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 00:18:13.595406 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 00:18:13.595414 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 00:18:13.595422 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 00:18:13.595430 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 00:18:13.595438 | orchestrator | ++ INTERACTIVE=false 2026-04-05 00:18:13.595446 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:18:13.595459 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:18:13.595468 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-05 00:18:13.595476 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0 2026-04-05 00:18:13.602086 | orchestrator | + set -e 2026-04-05 00:18:13.602128 | orchestrator | + VERSION=10.0.0 2026-04-05 
00:18:13.602135 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-05 00:18:13.611609 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-05 00:18:13.611745 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-05 00:18:13.614206 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-05 00:18:13.616611 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-05 00:18:13.621773 | orchestrator | /opt/configuration ~
2026-04-05 00:18:13.621841 | orchestrator | + set -e
2026-04-05 00:18:13.621855 | orchestrator | + pushd /opt/configuration
2026-04-05 00:18:13.621866 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 00:18:13.623138 | orchestrator | + source /opt/venv/bin/activate
2026-04-05 00:18:13.624090 | orchestrator | ++ deactivate nondestructive
2026-04-05 00:18:13.624123 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:13.624136 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:13.624149 | orchestrator | ++ hash -r
2026-04-05 00:18:13.624190 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:13.624204 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-05 00:18:13.624224 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-05 00:18:13.624236 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-05 00:18:13.624247 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-05 00:18:13.624258 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-05 00:18:13.624269 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-05 00:18:13.624280 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-05 00:18:13.624293 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 00:18:13.624534 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 00:18:13.624553 | orchestrator | ++ export PATH
2026-04-05 00:18:13.624564 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:13.624575 | orchestrator | ++ '[' -z '' ']'
2026-04-05 00:18:13.624585 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-05 00:18:13.624596 | orchestrator | ++ PS1='(venv) '
2026-04-05 00:18:13.624606 | orchestrator | ++ export PS1
2026-04-05 00:18:13.624617 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-05 00:18:13.624656 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-05 00:18:13.624667 | orchestrator | ++ hash -r
2026-04-05 00:18:13.624678 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-05 00:18:14.882320 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-05 00:18:14.882943 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-05 00:18:14.884490 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-05 00:18:14.886261 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-05 00:18:14.888150 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-05 00:18:14.898613 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-05 00:18:14.900389 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-05 00:18:14.901724 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-05 00:18:14.903334 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-05 00:18:14.936875 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-05 00:18:14.938390 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-05 00:18:14.940317 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-05 00:18:14.941996 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-05 00:18:14.946232 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-05 00:18:15.168138 | orchestrator | ++ which gilt
2026-04-05 00:18:15.173377 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-05 00:18:15.173439 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-05 00:18:15.437675 | orchestrator | osism.cfg-generics:
2026-04-05 00:18:15.592647 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-05 00:18:15.592763 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-05 00:18:15.592794 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-05 00:18:15.592808 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-05 00:18:16.612084 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-05 00:18:16.622795 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-05 00:18:17.003034 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-05 00:18:17.054842 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 00:18:17.054941 | orchestrator | + deactivate
2026-04-05 00:18:17.054959 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-05 00:18:17.054973 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 00:18:17.054985 | orchestrator | + export PATH
2026-04-05 00:18:17.054996 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-05 00:18:17.055007 | orchestrator | + '[' -n '' ']'
2026-04-05 00:18:17.055019 | orchestrator | + hash -r
2026-04-05 00:18:17.055029 | orchestrator | + '[' -n '' ']'
2026-04-05 00:18:17.055040 | orchestrator | + unset VIRTUAL_ENV
2026-04-05 00:18:17.055051 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-05 00:18:17.055062 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-05 00:18:17.055073 | orchestrator | + unset -f deactivate
2026-04-05 00:18:17.055084 | orchestrator | + popd
2026-04-05 00:18:17.055095 | orchestrator | ~
2026-04-05 00:18:17.056952 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-05 00:18:17.057008 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-05 00:18:17.057095 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-05 00:18:17.112575 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 00:18:17.112704 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-05 00:18:17.113845 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-05 00:18:17.195572 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 00:18:17.195711 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-05 00:18:17.201743 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-05 00:18:17.204142 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-05 00:18:17.299330 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 00:18:17.299442 | orchestrator | + source /opt/venv/bin/activate
2026-04-05 00:18:17.299458 | orchestrator | ++ deactivate nondestructive
2026-04-05 00:18:17.299470 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:17.299481 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:17.299493 | orchestrator | ++ hash -r
2026-04-05 00:18:17.299504 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:17.299517 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-05 00:18:17.299528 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-05 00:18:17.299539 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-05 00:18:17.299551 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-05 00:18:17.299562 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-05 00:18:17.299573 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-05 00:18:17.299583 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-05 00:18:17.299595 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 00:18:17.299606 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 00:18:17.299645 | orchestrator | ++ export PATH
2026-04-05 00:18:17.299659 | orchestrator | ++ '[' -n '' ']'
2026-04-05 00:18:17.299670 | orchestrator | ++ '[' -z '' ']'
2026-04-05 00:18:17.299680 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-05 00:18:17.299691 | orchestrator | ++ PS1='(venv) '
2026-04-05 00:18:17.299702 | orchestrator | ++ export PS1
2026-04-05 00:18:17.299713 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-05 00:18:17.299723 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-05 00:18:17.299734 | orchestrator | ++ hash -r
2026-04-05 00:18:17.299745 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-05 00:18:18.532326 | orchestrator |
2026-04-05 00:18:18.532443 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-05 00:18:18.532465 | orchestrator |
2026-04-05 00:18:18.532481 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 00:18:19.157976 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:19.158131 | orchestrator |
2026-04-05 00:18:19.158148 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-05 00:18:20.125749 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:20.125853 | orchestrator |
2026-04-05 00:18:20.125869 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-05 00:18:20.125881 | orchestrator |
2026-04-05 00:18:20.125893 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:18:22.523554 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:22.523685 | orchestrator |
2026-04-05 00:18:22.523700 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-05 00:18:22.578307 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:22.578393 | orchestrator |
2026-04-05 00:18:22.578410 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-05 00:18:23.087609 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:23.087766 | orchestrator |
2026-04-05 00:18:23.087792 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-05 00:18:23.122719 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:23.123013 | orchestrator |
2026-04-05 00:18:23.123032 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-05 00:18:23.488882 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:23.489050 | orchestrator |
2026-04-05 00:18:23.489080 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-05 00:18:23.834363 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:23.834460 | orchestrator |
2026-04-05 00:18:23.834475 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-05 00:18:23.947995 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:23.948119 | orchestrator |
2026-04-05 00:18:23.948137 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-05 00:18:23.948150 | orchestrator |
2026-04-05 00:18:23.948162 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:18:25.758909 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:25.759036 | orchestrator |
2026-04-05 00:18:25.759063 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-05 00:18:25.858346 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-05 00:18:25.858445 | orchestrator |
2026-04-05 00:18:25.858462 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-05 00:18:25.919988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-05 00:18:25.920097 | orchestrator |
2026-04-05 00:18:25.920115 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-05 00:18:27.201234 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-05 00:18:27.201340 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-05 00:18:27.201356 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-05 00:18:27.201368 | orchestrator |
2026-04-05 00:18:27.201380 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-05 00:18:29.334948 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-05 00:18:29.335064 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-05 00:18:29.335083 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-05 00:18:29.335097 | orchestrator |
2026-04-05 00:18:29.335111 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-05 00:18:30.046768 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:18:30.046890 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:30.046913 | orchestrator |
2026-04-05 00:18:30.046930 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-05 00:18:30.802163 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:18:30.802262 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:30.802276 | orchestrator |
2026-04-05 00:18:30.802288 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-05 00:18:30.849776 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:30.849860 | orchestrator |
2026-04-05 00:18:30.849871 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-05 00:18:31.267529 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:31.267701 | orchestrator |
2026-04-05 00:18:31.267722 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-05 00:18:31.351994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-05 00:18:31.352107 | orchestrator |
2026-04-05 00:18:31.352124 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-05 00:18:32.622320 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:32.721365 | orchestrator |
2026-04-05 00:18:32.721428 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-05 00:18:33.562928 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:33.563029 | orchestrator |
2026-04-05 00:18:33.563044 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-05 00:18:43.371976 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:43.372087 | orchestrator |
2026-04-05 00:18:43.372104 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-05 00:18:43.440418 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:43.440554 | orchestrator |
2026-04-05 00:18:43.440584 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-05 00:18:43.440662 | orchestrator |
2026-04-05 00:18:43.440686 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:18:45.465354 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:45.465458 | orchestrator |
2026-04-05 00:18:45.465475 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-05 00:18:45.583413 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-05 00:18:45.583564 | orchestrator |
2026-04-05 00:18:45.583583 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-05 00:18:45.662282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 00:18:45.662382 | orchestrator |
2026-04-05 00:18:45.662397 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-05 00:18:48.284817 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:48.284923 | orchestrator |
2026-04-05 00:18:48.284946 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-05 00:18:48.334852 | orchestrator | ok: [testbed-manager]
2026-04-05 00:18:48.334949 | orchestrator |
2026-04-05 00:18:48.334964 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-05 00:18:48.471053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-05 00:18:48.471158 | orchestrator |
2026-04-05 00:18:48.471195 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-05 00:18:51.442550 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-05 00:18:51.443585 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-05 00:18:51.443659 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-05 00:18:51.443671 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-05 00:18:51.443681 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-05 00:18:51.443692 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-05 00:18:51.443701 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-05 00:18:51.443712 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-05 00:18:51.443722 | orchestrator |
2026-04-05 00:18:51.443733 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-05 00:18:52.110239 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:52.110365 | orchestrator |
2026-04-05 00:18:52.110383 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-05 00:18:52.780722 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:52.780832 | orchestrator |
2026-04-05 00:18:52.780848 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-05 00:18:52.871214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-05 00:18:52.871310 | orchestrator |
2026-04-05 00:18:52.871318 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-05 00:18:54.149186 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-05 00:18:54.149266 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-05 00:18:54.149278 | orchestrator |
2026-04-05 00:18:54.149289 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-05 00:18:54.822488 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:54.822572 | orchestrator |
2026-04-05 00:18:54.822584 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-05 00:18:54.875093 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:54.875197 | orchestrator |
2026-04-05 00:18:54.875215 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-05 00:18:54.953004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-05 00:18:54.953099 | orchestrator |
2026-04-05 00:18:54.953114 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-05 00:18:55.627794 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:55.627900 | orchestrator |
2026-04-05 00:18:55.627918 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-05 00:18:55.711340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-05 00:18:55.711452 | orchestrator |
2026-04-05 00:18:55.711469 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-05 00:18:57.118592 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:18:57.118790 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:18:57.118808 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:57.118821 | orchestrator |
2026-04-05 00:18:57.118833 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-05 00:18:57.838548 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:57.838693 | orchestrator |
2026-04-05 00:18:57.838712 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-05 00:18:57.877471 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:18:57.877549 | orchestrator |
2026-04-05 00:18:57.877558 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-05 00:18:57.978281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-05 00:18:57.978386 | orchestrator |
2026-04-05 00:18:57.978401 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-05 00:18:58.530387 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:58.530505 | orchestrator |
2026-04-05 00:18:58.530522 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-05 00:18:58.985196 | orchestrator | changed: [testbed-manager]
2026-04-05 00:18:58.985294 | orchestrator |
2026-04-05 00:18:58.985309 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-05 00:19:00.239959 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-05 00:19:00.240049 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-05 00:19:00.240060 | orchestrator |
2026-04-05 00:19:00.240066 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-05 00:19:00.918932 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:00.919059 | orchestrator |
2026-04-05 00:19:00.919087 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-05 00:19:01.343887 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:01.344017 | orchestrator |
2026-04-05 00:19:01.344046 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-05 00:19:01.762821 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:01.762920 | orchestrator |
2026-04-05 00:19:01.762936 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-05 00:19:01.807101 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:19:01.807198 | orchestrator |
2026-04-05 00:19:01.807221 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-05 00:19:01.879826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-05 00:19:01.879907 | orchestrator |
2026-04-05 00:19:01.879917 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-05 00:19:01.935240 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:01.935342 | orchestrator |
2026-04-05 00:19:01.935358 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-05 00:19:04.078574 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-05 00:19:04.078802 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-05 00:19:04.078821 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-05 00:19:04.078832 | orchestrator |
2026-04-05 00:19:04.078843 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-05 00:19:04.792542 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:04.792662 | orchestrator |
2026-04-05 00:19:04.792674 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-05 00:19:05.495085 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:05.495189 | orchestrator |
2026-04-05 00:19:05.495206 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-05 00:19:06.244089 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:06.244210 | orchestrator |
2026-04-05 00:19:06.244234 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-05 00:19:06.324922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-05 00:19:06.325050 | orchestrator |
2026-04-05 00:19:06.325077 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-05 00:19:06.382095 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:06.382193 | orchestrator |
2026-04-05 00:19:06.382210 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-05 00:19:07.125589 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-05 00:19:07.125769 | orchestrator |
2026-04-05 00:19:07.125785 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-05 00:19:07.255410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-05 00:19:07.255504 | orchestrator |
2026-04-05 00:19:07.255517 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-05 00:19:07.983352 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:07.983482 | orchestrator |
2026-04-05 00:19:07.983506 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-05 00:19:08.631171 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:08.631291 | orchestrator |
2026-04-05 00:19:08.631306 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-05 00:19:08.680280 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:19:08.680406 | orchestrator |
2026-04-05 00:19:08.680460 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-05 00:19:08.744317 | orchestrator | ok: [testbed-manager]
2026-04-05 00:19:08.744412 | orchestrator |
2026-04-05 00:19:08.744427 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-05 00:19:09.580411 | orchestrator | changed: [testbed-manager]
2026-04-05 00:19:09.580538 | orchestrator |
2026-04-05 00:19:09.580564 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-05 00:20:30.706252 | orchestrator | changed: [testbed-manager]
2026-04-05 00:20:30.706356 | orchestrator |
2026-04-05 00:20:30.706367 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-05 00:20:31.817053 | orchestrator | ok: [testbed-manager]
2026-04-05 00:20:31.817152 | orchestrator |
2026-04-05 00:20:31.817167 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-05 00:20:31.885382 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:20:31.885467 | orchestrator |
2026-04-05 00:20:31.885478 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-05 00:20:34.865231 | orchestrator | changed: [testbed-manager]
2026-04-05 00:20:34.865306 | orchestrator |
2026-04-05 00:20:34.865316 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-05 00:20:35.009933 | orchestrator | ok: [testbed-manager]
2026-04-05 00:20:35.010091 | orchestrator |
2026-04-05 00:20:35.010107 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-05 00:20:35.010117 | orchestrator |
2026-04-05 00:20:35.010127 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-05 00:20:35.073633 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:20:35.073727 | orchestrator |
2026-04-05 00:20:35.073740 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-05 00:21:35.121742 | orchestrator | Pausing for 60 seconds
2026-04-05 00:21:35.121839 | orchestrator | changed: [testbed-manager]
2026-04-05 00:21:35.121851 | orchestrator |
2026-04-05 00:21:35.121860 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-05 00:21:38.792943 | orchestrator | changed: [testbed-manager]
2026-04-05 00:21:38.793033 | orchestrator |
2026-04-05 00:21:38.793049 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-05 00:22:40.876289 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-05 00:22:40.876410 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-05 00:22:40.876426 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-04-05 00:22:40.876447 | orchestrator | changed: [testbed-manager]
2026-04-05 00:22:40.876468 | orchestrator |
2026-04-05 00:22:40.876489 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-05 00:22:47.249331 | orchestrator | changed: [testbed-manager]
2026-04-05 00:22:47.249450 | orchestrator |
2026-04-05 00:22:47.249467 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-05 00:22:47.336208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-05 00:22:47.336303 | orchestrator |
2026-04-05 00:22:47.336318 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-05 00:22:47.336331 | orchestrator |
2026-04-05 00:22:47.336342 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-05 00:22:47.378731 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:22:47.378832 | orchestrator |
2026-04-05 00:22:47.378855 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-05 00:22:47.445149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-05 00:22:47.445244 | orchestrator |
2026-04-05 00:22:47.445260 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-05 00:22:48.267336 | orchestrator | changed: [testbed-manager]
2026-04-05 00:22:48.267448 | orchestrator |
2026-04-05 00:22:48.267468 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-05 00:22:51.574852 | orchestrator | ok: [testbed-manager]
2026-04-05 00:22:51.574940 | orchestrator |
2026-04-05 00:22:51.574957 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-05 00:22:51.651723 | orchestrator | ok: [testbed-manager] => {
2026-04-05 00:22:51.651800 | orchestrator |     "version_check_result.stdout_lines": [
2026-04-05 00:22:51.651814 | orchestrator |         "=== OSISM Container Version Check ===",
2026-04-05 00:22:51.651825 | orchestrator |         "Checking running containers against expected versions...",
2026-04-05 00:22:51.651837 | orchestrator |         "",
2026-04-05 00:22:51.651848 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-05 00:22:51.651859 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-05 00:22:51.651870 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.651881 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-05 00:22:51.651892 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.651903 | orchestrator |         "",
2026-04-05 00:22:51.651948 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-05 00:22:51.651960 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-05 00:22:51.651971 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.651982 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-05 00:22:51.651993 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652003 | orchestrator |         "",
2026-04-05 00:22:51.652014 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-05 00:22:51.652025 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-05 00:22:51.652035 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652046 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-05 00:22:51.652057 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652067 | orchestrator |         "",
2026-04-05 00:22:51.652078 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-05 00:22:51.652089 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-05 00:22:51.652099 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652110 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-05 00:22:51.652120 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652131 | orchestrator |         "",
2026-04-05 00:22:51.652142 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-05 00:22:51.652152 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-05 00:22:51.652163 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652173 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-05 00:22:51.652184 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652195 | orchestrator |         "",
2026-04-05 00:22:51.652205 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-04-05 00:22:51.652216 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-05 00:22:51.652227 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652237 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-05 00:22:51.652248 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652258 | orchestrator |         "",
2026-04-05 00:22:51.652269 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-04-05 00:22:51.652280 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-05 00:22:51.652290 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652301 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-05 00:22:51.652312 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652323 | orchestrator |         "",
2026-04-05 00:22:51.652334 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-04-05 00:22:51.652344 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-05 00:22:51.652355 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652365 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-05 00:22:51.652376 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652387 | orchestrator |         "",
2026-04-05 00:22:51.652397 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-04-05 00:22:51.652408 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-05 00:22:51.652419 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652429 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-05 00:22:51.652439 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652450 | orchestrator |         "",
2026-04-05 00:22:51.652461 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-04-05 00:22:51.652471 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-05 00:22:51.652482 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652492 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-05 00:22:51.652503 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652543 | orchestrator |         "",
2026-04-05 00:22:51.652555 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-04-05 00:22:51.652565 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-05 00:22:51.652576 | orchestrator |         "  Enabled: true",
2026-04-05 00:22:51.652587 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-05 00:22:51.652602 | orchestrator |         "  Status: ✅ MATCH",
2026-04-05 00:22:51.652613 | orchestrator |         "",
2026-04-05 00:22:51.652624 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-04-05 00:22:51.652634 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 00:22:51.652645 | orchestrator | " Enabled: true", 2026-04-05 00:22:51.652656 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 00:22:51.652666 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:22:51.652677 | orchestrator | "", 2026-04-05 00:22:51.652688 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-05 00:22:51.652699 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 00:22:51.652710 | orchestrator | " Enabled: true", 2026-04-05 00:22:51.652720 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 00:22:51.652731 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:22:51.652742 | orchestrator | "", 2026-04-05 00:22:51.652752 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-05 00:22:51.652763 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 00:22:51.652774 | orchestrator | " Enabled: true", 2026-04-05 00:22:51.652784 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 00:22:51.652812 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:22:51.652823 | orchestrator | "", 2026-04-05 00:22:51.652834 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-05 00:22:51.652845 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 00:22:51.652856 | orchestrator | " Enabled: true", 2026-04-05 00:22:51.652866 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 00:22:51.652877 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:22:51.652888 | orchestrator | "", 2026-04-05 00:22:51.652898 | orchestrator | "=== Summary ===", 2026-04-05 00:22:51.652909 | orchestrator | "Errors (version mismatches): 0", 2026-04-05 00:22:51.652920 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-05 00:22:51.652931 | orchestrator | "", 2026-04-05 00:22:51.652941 | orchestrator | "✅ All running containers match expected versions!" 2026-04-05 00:22:51.652952 | orchestrator | ] 2026-04-05 00:22:51.652963 | orchestrator | } 2026-04-05 00:22:51.652974 | orchestrator | 2026-04-05 00:22:51.652985 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-05 00:22:51.689628 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:22:51.689710 | orchestrator | 2026-04-05 00:22:51.689724 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:22:51.689735 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-05 00:22:51.689745 | orchestrator | 2026-04-05 00:22:51.777556 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-05 00:22:51.777616 | orchestrator | + deactivate 2026-04-05 00:22:51.777626 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-05 00:22:51.777634 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-05 00:22:51.777642 | orchestrator | + export PATH 2026-04-05 00:22:51.777649 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-05 00:22:51.777656 | orchestrator | + '[' -n '' ']' 2026-04-05 00:22:51.777662 | orchestrator | + hash -r 2026-04-05 00:22:51.777669 | orchestrator | + '[' -n '' ']' 2026-04-05 00:22:51.777675 | orchestrator | + unset VIRTUAL_ENV 2026-04-05 00:22:51.777682 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-05 00:22:51.777688 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-05 00:22:51.777695 | orchestrator | + unset -f deactivate 2026-04-05 00:22:51.777735 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-05 00:22:51.786887 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 00:22:51.786935 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-05 00:22:51.786941 | orchestrator | + local max_attempts=60 2026-04-05 00:22:51.786946 | orchestrator | + local name=ceph-ansible 2026-04-05 00:22:51.786950 | orchestrator | + local attempt_num=1 2026-04-05 00:22:51.787532 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:22:51.810371 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:22:51.810418 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-05 00:22:51.810423 | orchestrator | + local max_attempts=60 2026-04-05 00:22:51.810428 | orchestrator | + local name=kolla-ansible 2026-04-05 00:22:51.810431 | orchestrator | + local attempt_num=1 2026-04-05 00:22:51.810435 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-05 00:22:51.839900 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:22:51.839955 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-05 00:22:51.839965 | orchestrator | + local max_attempts=60 2026-04-05 00:22:51.839972 | orchestrator | + local name=osism-ansible 2026-04-05 00:22:51.839980 | orchestrator | + local attempt_num=1 2026-04-05 00:22:51.839987 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-05 00:22:51.868927 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:22:51.868981 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-05 00:22:51.868990 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-05 00:22:52.554470 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-05 00:22:52.728105 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-05 00:22:52.728165 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 00:22:52.728173 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 00:22:52.728179 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-05 00:22:52.728194 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-04-05 00:22:52.728199 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:22:52.728205 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:22:52.728211 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-05 00:22:52.728216 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:22:52.728222 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-05 00:22:52.728228 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes 
(healthy) 2026-04-05 00:22:52.728233 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-05 00:22:52.728255 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 00:22:52.728266 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-05 00:22:52.728276 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-05 00:22:52.728287 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:22:52.732605 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-05 00:22:52.772363 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 00:22:52.772453 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-05 00:22:52.774349 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-05 00:23:05.610998 | orchestrator | 2026-04-05 00:23:05 | INFO  | Prepare task for execution of resolvconf. 2026-04-05 00:23:05.833490 | orchestrator | 2026-04-05 00:23:05 | INFO  | Task 90c75a7d-cb90-4bfc-b523-e5339371b253 (resolvconf) was prepared for execution. 2026-04-05 00:23:05.833655 | orchestrator | 2026-04-05 00:23:05 | INFO  | It takes a moment until task 90c75a7d-cb90-4bfc-b523-e5339371b253 (resolvconf) has been started and output is visible here. 
2026-04-05 00:23:20.670238 | orchestrator |
2026-04-05 00:23:20.670351 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-04-05 00:23:20.670370 | orchestrator |
2026-04-05 00:23:20.670391 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:23:20.670410 | orchestrator | Sunday 05 April 2026 00:23:09 +0000 (0:00:00.191) 0:00:00.191 **********
2026-04-05 00:23:20.670429 | orchestrator | ok: [testbed-manager]
2026-04-05 00:23:20.670448 | orchestrator |
2026-04-05 00:23:20.670463 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-05 00:23:20.670484 | orchestrator | Sunday 05 April 2026 00:23:13 +0000 (0:00:04.111) 0:00:04.302 **********
2026-04-05 00:23:20.670596 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:23:20.670619 | orchestrator |
2026-04-05 00:23:20.670635 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-05 00:23:20.670647 | orchestrator | Sunday 05 April 2026 00:23:13 +0000 (0:00:00.077) 0:00:04.380 **********
2026-04-05 00:23:20.670658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-04-05 00:23:20.670670 | orchestrator |
2026-04-05 00:23:20.670681 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-05 00:23:20.670692 | orchestrator | Sunday 05 April 2026 00:23:13 +0000 (0:00:00.092) 0:00:04.472 **********
2026-04-05 00:23:20.670703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 00:23:20.670714 | orchestrator |
2026-04-05 00:23:20.670725 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-05 00:23:20.670736 | orchestrator | Sunday 05 April 2026 00:23:13 +0000 (0:00:00.087) 0:00:04.560 **********
2026-04-05 00:23:20.670747 | orchestrator | ok: [testbed-manager]
2026-04-05 00:23:20.670758 | orchestrator |
2026-04-05 00:23:20.670768 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-05 00:23:20.670782 | orchestrator | Sunday 05 April 2026 00:23:15 +0000 (0:00:01.515) 0:00:06.076 **********
2026-04-05 00:23:20.670795 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:23:20.670834 | orchestrator |
2026-04-05 00:23:20.670847 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-05 00:23:20.670861 | orchestrator | Sunday 05 April 2026 00:23:15 +0000 (0:00:00.067) 0:00:06.143 **********
2026-04-05 00:23:20.670879 | orchestrator | ok: [testbed-manager]
2026-04-05 00:23:20.670907 | orchestrator |
2026-04-05 00:23:20.670928 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-05 00:23:20.670947 | orchestrator | Sunday 05 April 2026 00:23:16 +0000 (0:00:00.617) 0:00:06.761 **********
2026-04-05 00:23:20.670965 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:23:20.670982 | orchestrator |
2026-04-05 00:23:20.671001 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-05 00:23:20.671020 | orchestrator | Sunday 05 April 2026 00:23:16 +0000 (0:00:00.096) 0:00:06.857 **********
2026-04-05 00:23:20.671040 | orchestrator | changed: [testbed-manager]
2026-04-05 00:23:20.671058 | orchestrator |
2026-04-05 00:23:20.671076 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-05 00:23:20.671088 | orchestrator | Sunday 05 April 2026 00:23:16 +0000 (0:00:00.646) 0:00:07.504 **********
2026-04-05 00:23:20.671099 | orchestrator | changed: [testbed-manager]
2026-04-05 00:23:20.671109 | orchestrator |
2026-04-05 00:23:20.671120 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-05 00:23:20.671132 | orchestrator | Sunday 05 April 2026 00:23:17 +0000 (0:00:01.240) 0:00:08.744 **********
2026-04-05 00:23:20.671142 | orchestrator | ok: [testbed-manager]
2026-04-05 00:23:20.671153 | orchestrator |
2026-04-05 00:23:20.671164 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-05 00:23:20.671175 | orchestrator | Sunday 05 April 2026 00:23:19 +0000 (0:00:01.076) 0:00:09.820 **********
2026-04-05 00:23:20.671186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-04-05 00:23:20.671197 | orchestrator |
2026-04-05 00:23:20.671207 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-05 00:23:20.671218 | orchestrator | Sunday 05 April 2026 00:23:19 +0000 (0:00:00.082) 0:00:09.903 **********
2026-04-05 00:23:20.671229 | orchestrator | changed: [testbed-manager]
2026-04-05 00:23:20.671239 | orchestrator |
2026-04-05 00:23:20.671250 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:23:20.671261 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-05 00:23:20.671272 | orchestrator |
2026-04-05 00:23:20.671283 | orchestrator |
2026-04-05 00:23:20.671294 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:23:20.671304 | orchestrator | Sunday 05 April 2026 00:23:20 +0000 (0:00:01.261) 0:00:11.164 **********
2026-04-05 00:23:20.671315 | orchestrator | ===============================================================================
2026-04-05 00:23:20.671325 | orchestrator | Gathering Facts --------------------------------------------------------- 4.11s
2026-04-05 00:23:20.671336 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.52s
2026-04-05 00:23:20.671346 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.26s
2026-04-05 00:23:20.671357 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.24s
2026-04-05 00:23:20.671666 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.08s
2026-04-05 00:23:20.671698 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.65s
2026-04-05 00:23:20.671745 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.62s
2026-04-05 00:23:20.671762 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.10s
2026-04-05 00:23:20.671778 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-04-05 00:23:20.671812 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2026-04-05 00:23:20.671831 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-04-05 00:23:20.671848 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2026-04-05 00:23:20.671866 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-04-05 00:23:20.913776 | orchestrator | + osism apply sshconfig
2026-04-05 00:23:32.375938 | orchestrator | 2026-04-05 00:23:32 | INFO  | Prepare task for execution of sshconfig.
2026-04-05 00:23:32.466860 | orchestrator | 2026-04-05 00:23:32 | INFO  | Task a052f050-976d-42ce-82f3-98e76722ba98 (sshconfig) was prepared for execution.
2026-04-05 00:23:32.466960 | orchestrator | 2026-04-05 00:23:32 | INFO  | It takes a moment until task a052f050-976d-42ce-82f3-98e76722ba98 (sshconfig) has been started and output is visible here.
2026-04-05 00:23:45.029015 | orchestrator |
2026-04-05 00:23:45.029092 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-04-05 00:23:45.029099 | orchestrator |
2026-04-05 00:23:45.029104 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-04-05 00:23:45.029108 | orchestrator | Sunday 05 April 2026 00:23:35 +0000 (0:00:00.207) 0:00:00.207 **********
2026-04-05 00:23:45.029113 | orchestrator | ok: [testbed-manager]
2026-04-05 00:23:45.029118 | orchestrator |
2026-04-05 00:23:45.029122 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-04-05 00:23:45.029126 | orchestrator | Sunday 05 April 2026 00:23:36 +0000 (0:00:00.973) 0:00:01.180 **********
2026-04-05 00:23:45.029130 | orchestrator | changed: [testbed-manager]
2026-04-05 00:23:45.029135 | orchestrator |
2026-04-05 00:23:45.029139 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-04-05 00:23:45.029142 | orchestrator | Sunday 05 April 2026 00:23:37 +0000 (0:00:00.623) 0:00:01.803 **********
2026-04-05 00:23:45.029146 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-04-05 00:23:45.029150 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-04-05 00:23:45.029154 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-04-05 00:23:45.029158 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-04-05 00:23:45.029162 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-04-05 00:23:45.029166 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-04-05 00:23:45.029169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-04-05 00:23:45.029173 | orchestrator |
2026-04-05 00:23:45.029177 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-04-05 00:23:45.029181 | orchestrator | Sunday 05 April 2026 00:23:44 +0000 (0:00:06.466) 0:00:08.270 **********
2026-04-05 00:23:45.029185 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:23:45.029189 | orchestrator |
2026-04-05 00:23:45.029193 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-04-05 00:23:45.029197 | orchestrator | Sunday 05 April 2026 00:23:44 +0000 (0:00:00.158) 0:00:08.429 **********
2026-04-05 00:23:45.029200 | orchestrator | changed: [testbed-manager]
2026-04-05 00:23:45.029204 | orchestrator |
2026-04-05 00:23:45.029208 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:23:45.029213 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:23:45.029218 | orchestrator |
2026-04-05 00:23:45.029222 | orchestrator |
2026-04-05 00:23:45.029226 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:23:45.029230 | orchestrator | Sunday 05 April 2026 00:23:44 +0000 (0:00:00.604) 0:00:09.033 **********
2026-04-05 00:23:45.029234 | orchestrator | ===============================================================================
2026-04-05 00:23:45.029238 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.47s
2026-04-05 00:23:45.029258 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.97s
2026-04-05 00:23:45.029262 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.62s
2026-04-05 00:23:45.029266 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s
2026-04-05 00:23:45.029270 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.16s
2026-04-05 00:23:45.275125 | orchestrator | + osism apply known-hosts
2026-04-05 00:23:56.744324 | orchestrator | 2026-04-05 00:23:56 | INFO  | Prepare task for execution of known-hosts.
2026-04-05 00:23:56.827257 | orchestrator | 2026-04-05 00:23:56 | INFO  | Task ad3417df-c6c4-4f09-b30e-9676fdb1c9ab (known-hosts) was prepared for execution.
2026-04-05 00:23:56.827354 | orchestrator | 2026-04-05 00:23:56 | INFO  | It takes a moment until task ad3417df-c6c4-4f09-b30e-9676fdb1c9ab (known-hosts) has been started and output is visible here.
2026-04-05 00:24:13.631656 | orchestrator |
2026-04-05 00:24:13.631764 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-04-05 00:24:13.631781 | orchestrator |
2026-04-05 00:24:13.631793 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-04-05 00:24:13.631805 | orchestrator | Sunday 05 April 2026 00:24:00 +0000 (0:00:00.210) 0:00:00.210 **********
2026-04-05 00:24:13.631817 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-05 00:24:13.631829 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-05 00:24:13.631840 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-05 00:24:13.631862 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-05 00:24:13.631874 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-05 00:24:13.631886 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-05 00:24:13.631898 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-05 00:24:13.631928 | orchestrator |
2026-04-05 00:24:13.631952 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-04-05 00:24:13.631964 | orchestrator | Sunday 05 April 2026 00:24:06 +0000 (0:00:06.733) 0:00:06.943 **********
2026-04-05 00:24:13.631975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-05 00:24:13.631988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-05 00:24:13.631999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-05 00:24:13.632010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-05 00:24:13.632021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-05 00:24:13.632033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-05 00:24:13.632045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-05 00:24:13.632056 | orchestrator |
2026-04-05 00:24:13.632067 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 00:24:13.632078 | orchestrator | Sunday 05 April 2026 00:24:07 +0000 (0:00:00.169) 0:00:07.113 **********
2026-04-05 00:24:13.632111 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDLvnArcDz/O5bgdf4OoFCU5dihhQD1b/nTZm/wGZ8Bh)
2026-04-05 00:24:13.632127 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoMTGJ1dfIJcTffQVQcTyIjmzm2NdgNU/cFbDEIBjisEP6DcfEOay23tZFK7c0TThc1eKhyI4xDo5BJx6zCs7++6HrLLW01FJ4Z/+5WggWDAOPTB2EauuAJkVSjUiKTqHmU6HOkGdrHCmWwxGktsKbFltfNe8yulS2hJrNQlr47B9XAR1HwQpsth+G0sSA+20T3nYKf1koOlf40WQr4hCL+1Rcz+rpIVvugD0LkIrB125r+TlDzS8Wklg6o+NsUDzwMBY3zrCO1vXeZc81mwzXKy9StNggan5Ry9pqqYGs0kjwflQniV1uAxXFxt4OZfd27vbDdRWxfdkanDfR0KbJOtFmtImaj47J+2Us/z1zMshKHEEVYMTs1xQl8MML5klhH5CVp/ADgoOZQCe8dg08tILDO5bTEXLXviE1NvY+GrQDxWDwmmEPL+wBNbQw/JFdaupE1rlVSd6QXWNoLffUX8El+izw5rutfxqb++UFMUrObs8/DNmr6lqMMEKDE5U=)
2026-04-05 00:24:13.632140 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBINK4ddP/rCllqRJuJl/74xAqvWEi5kMZS12GFWud9ERj/E5BlOTaj/5olQmkObpaB7eDRZUy1rjwfbjz/Hoe8k=)
2026-04-05 00:24:13.632152 | orchestrator |
2026-04-05 00:24:13.632163 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 00:24:13.632174 | orchestrator | Sunday 05 April 2026 00:24:08 +0000 (0:00:01.454) 0:00:08.567 **********
2026-04-05 00:24:13.632210 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7E1Jj46/n1//75rMsI9NHe6QGJQOwW71/K3QD2AD/u1AgsJkkbJXuS9hvS3JWEpM31BhMzqPoXBykF+IZwYv4zY8ehQzVnRk+2H5Nos9rAZ+4SSD9f6fS/JhLQ1AeBmIviAiXmVNuCO0kL7ZR+vwG3y7qhjWhhD3ZcMcjSwDJs6gpLh/TWrA8eeyGtn3Jw9SpmvIMvB4+Cd145sOf+OZGjhIq6EvkfUghK5Royiq24d9v3VWc8qeViCK/q+WxuPyf1iMKQrxcBGydkoMta3JFVvT4NaxAkjnX+5gkGcGJ4oMXUDytWK9vBCwAyq28t0h6U1HqoHSCd07A53J5N1kT545mTuil+e0oU4VXT3LL+PrJlgt8YwUcDWDrfuGlCeXs+8Dzfu6/sJT3tXllP8oOFuXJlJuPWk1tD6mximfPjg4Vm0sA9o0rJQvjbJXWgbCZL7kwiAC3AhE4y8yXNx+f0cZfCydrgUu17EPCB2NwlIjQ6piojVvNbQ10ADMI17k=)
2026-04-05 00:24:13.632225 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTaJIkFIvskzvXoOxI0d5UALQUxkSzP07mfPzV4gtsO)
2026-04-05 00:24:13.632237 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJfFXPQ0T6WOS+ne3nvD1d8SSzAO7Lg1caPwUDCXNPR7936uwbk+rTA3Q/AMKv967cHWyTM0X6QhHkoCk1T721E=)
2026-04-05 00:24:13.632249 | orchestrator |
2026-04-05 00:24:13.632261 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 00:24:13.632274 | orchestrator | Sunday 05 April 2026 00:24:09 +0000 (0:00:01.122) 0:00:09.690 **********
2026-04-05 00:24:13.632286 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2ft3NpO3HjS2rqWlgiVQZMRB62mdo0USk3OdGKLYr9WuIGorVkU31c3O8UfkX3qO4dnwE1FcDjk7LC2qkU17Q=)
2026-04-05 00:24:13.632362 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0xNiCLls7ti1MwEUoT2V5x1Yp2bLMm5r/oRnSEr8j82E6kDdWekamU7vgzAToauk5S055Gf0WlKFl4Gucz6zzfKQis30yWwLOwd5DlIoA8bZ5PE49mxqTsE10vJWFCbuAlIuPjsfb9vNGegYtbv4xkZhNBNUuMaQAf0d6M/ld/SR9DikI1PuqfosbZC3IFIt9H1Ojbk0DU4IIgA4T9Xh7TkCkNh2I7FT76jVGonEx7ju3gjK49ppRK5UV//ecDYz7HMBFwe6JyRFANwYeKbgI573myNk58k31LmCWema7pa5IOXJ0mNIzawVqcoE+MBJ6upe9zmeNsJcgHNw9Z+J1tZOm7a31ohIP9c8D51HqjpBmvHaVhCio9c9MVjN+52+S7oJRlfIGip/XJrRWaZ/ZgDtFcjaPd5xeVMBt/1kFSD8UKj/SINdIs16q65SAjwJQIpCxfGFaSyoJBvofMy33lrEiv0Y6En8hk2IAFhqs5Le2mZrnrFYkQ7w2RpPgwVM=)
2026-04-05 00:24:13.632375 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINP5ElyeZf0AcO1Ite7cELNcibAENbJNx3lvCVLQYwBm)
2026-04-05 00:24:13.632385 | orchestrator |
2026-04-05 00:24:13.632397 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 00:24:13.632411 | orchestrator | Sunday 05 April 2026 00:24:10 +0000 (0:00:01.210) 0:00:10.900 **********
2026-04-05 00:24:13.632423 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEO1HEtitnY1jxmU80sKLYmwOb7TfY+X6/m0mEYYL+wCe77UG426lEeoVtM8u5EfPv5imI/z0WQgifd7bws3KTQ=)
2026-04-05 00:24:13.632446 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL4KFnBdq22WmIYbVk0YXmWsg0m9tkRmlvbejzdcB6XN)
2026-04-05 00:24:13.632461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDuJ3tRMDUL3FqxZSKCB4m7+PtwlCRM2sSJpNM3GyupM+IHjbjnznpVD4YvEeNbKB4ZQ33tGI7FKqeW+LgERcLB194ehFZEm9MsZ4I6wRc6Vt+CyoXN+CRne7gLvuL+saCsIaPlNGT6Mj/hC6SjFTAJiaJlGCmdt2rzx11EVJDzHapnHr7hAtNJ6RB/Nf795LD6YtsINWyW2dTEIbLlfWzn5/nGjTdcNb7qaCREflqF5ICOzEyuJheC8CE5XxDfOqRA2fp0oOlvtpniBE/PhLqQEtBgKXZeu1aGq25UTIPDkalj08PMrdN6LfbZWQxpVmfkeDDpnueArQn77/hwzmiawKB445osmybGiCDCsfCHnTsSnON6iklmtxf7CIRLyIy7FfCDgExX7W3CaqTliDpY6aCpUyz7qbZIt9FDdmwxbPHWXDHlCk3PX3NI3+tZdIwNVUZZb3lVqpEfPnmMTuyAOd/LCZtCoCxab1eVOUpXKLGEE+x34uOcgu90vU3Sy/M=)
2026-04-05 00:24:13.632499 | orchestrator |
2026-04-05 00:24:13.632514 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 00:24:13.632527 | orchestrator | Sunday 05 April 2026 00:24:12 +0000 (0:00:01.166) 0:00:12.067 **********
2026-04-05 00:24:13.632540 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFYz/OOtWlh27Q7civi36hEcK2HM8hO5+xwcHVbKAoB1J2tr01VZnMAbOvW+51qpjDJBjgAmi/1oiz1xVg/8y2s=)
2026-04-05 00:24:13.632552 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzFVTfZL6pa1pIz0SOlPAhMlu3MoEazGXKeZLHbs0B5)
2026-04-05 00:24:13.632563 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIf9KoQtVzxI9a+PhpHhBL3tTWjWHd864ZwZ8wm/rsxOw2pIAtPdcUzUWix2+WOWxERxi5fjxozCD3cC+tABIgvW2YbTAVkGkJjay8o2NpQmPhuGe2Y+QnB0ffN3ST8Gbmvb8Z/gkAdt2QYLYy+VkpSEpKnr3D4eGJaPVX9e7QG1uL/UnbaVmZvTCdoULgnCoiL0mr/XB5rWylcV8OWlrVN6JM5rquivJxms3WC9fyK7lK5OSDvbjYBTCp7hGDWlPRXmzM7+h0942QWksMz9xqQ6Rf4uHAmbboiJVcE/sa1OaFDmdAN0zJ+xw+GFLj3zalEVJmrryKNwEGgLP7LluEFveEqg1/o8qmTb7U8HB/lHcb7Z2MHjGmNQ9nwexB+vHzEeIf3aVkrpiXTzfkenYxiCCnzkMfaOGiGtM78L32NkEizgdvSNumDEwCJAuH+UhqaXvUhiQ0tnJD/iKBCHIuhTgfXxp93wGi54Fgohck2ZFzKk6pbSlmaiWSzfJBf48=)
2026-04-05 00:24:13.632575 | orchestrator |
2026-04-05 00:24:13.632586 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 00:24:13.632598 | orchestrator | Sunday 05 April 2026 00:24:13 +0000 (0:00:01.160) 0:00:13.228 **********
2026-04-05 00:24:13.632619 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAcWQuQm3NL0NIhnlRI7YVtIj43yFFOEeSiwf4ulOSn2rFaHluS0ptwrpIhMbVIPuPBUYu8jfc6myySO414C5uE=)
2026-04-05 00:24:25.679629 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhHRnUfAcxQTwXsrGuKjbnOgFc79omYW8Yp9VvwVcrBUD4iyFDqLvoYntavfwdfHMiS7HXsKqWdJdlXZS6O1XNnre/ZeZJCTY2pF2bwKFx/l/ptP+098SQuTDpnJZHMvU+pHcZ/ZlRb70iMJuVG8qPnCiZv5J6QWBFnAX5XS3IoiIlqzVYiRvh2lG/ANzuLBVw2iCj+cD1MrMDCe2Vf/wRwQq9XCZJbiPXgxelU6+0KXHnSUkKukTBAp5jAJBSzAKaKXOBGqIByOn6dz1Pb3VKePIzSuti0tLIy9qpAplbtNxTI8FkgVBpD8S6IWCf5OlhBbahoM8HpJ4k4/GsDNfSgS1S7RcBiFfAbiX4mFAYPpfzzm4nlflYnYeyuZBsyWAP6ixGzFFUzHFhH0Xxx7t+xBmV9GcAaJybgM9lvNGWdbjGWcM5TkwDGbVqahhHsY66RlxKIWH6kM9E3kqCuviL3g7EuupWUsF1+agCi6S7H+Dr/LCKQv2xnyXTThY1NpE=)
2026-04-05 00:24:25.679771 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPKGXBPpc4phOOABs9uMmDEMtECZse3qeTc3rFSjBZ/p)
2026-04-05 00:24:25.679794 | orchestrator |
2026-04-05 00:24:25.679806 |
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:25.679819 | orchestrator | Sunday 05 April 2026 00:24:14 +0000 (0:00:01.124) 0:00:14.353 ********** 2026-04-05 00:24:25.679830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJLtr/r+YuFNVFAo3522NB6PzdEOgIjZ0eMNMTRxeIVS) 2026-04-05 00:24:25.679843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDevIY+apRzAaiJ4TRo9ghcUGA4q3HK+1i2GajWGwXYifYdaChcyYe5wGjLz4cCse9MDCTzHr2V9yMXczocryajQBsFqeilBHridFdQ7oofOMs+n8bLTIL3tuWkhXTl1mvubd3ELUFoDXBsCWIy0z1danzrrsVn4bso4/uuYG1ai2Wm2mJmHnavB5sHwh8sET7jteyua3zNyhkwOFYvT5Zw3ZhcnTKoK8g8ugIQsNiA8Apg2m7IIxlrx8ASJ0jleqYW8CLKnwVd0VrVczA9yXhOlHG61dHdRU+gwVKDV+SHBLNKkfQmE6Ru6SiG3U/rIW9QB+4EYssmq2Ymc3leKf5BMe8m9qcESUewx3O4TjthGE9jJ5ItlsPKN0Lj5YAIJIvYl9oSqmkCx/FkszxXDGFqocLJUQOCvSeuhDfpCq1k70x4j+ky/GjshpS1dBoRdAVPu+7Fa3azrdtYQBpPXmZmZX4K4ucp88a6zpaTzdd+sdzrcjoV65CUYqZk1Cyck/0=) 2026-04-05 00:24:25.679882 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBNT9+yuGEITBiV9yv2sNShyp6arq6Txaq3J9kLRHG2nfFOjni+4OygrHAFqgovODsg4fsF6ijMrp+CgMHT07fc=) 2026-04-05 00:24:25.679896 | orchestrator | 2026-04-05 00:24:25.679908 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-05 00:24:25.679919 | orchestrator | Sunday 05 April 2026 00:24:15 +0000 (0:00:01.141) 0:00:15.494 ********** 2026-04-05 00:24:25.679931 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-05 00:24:25.679942 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-05 00:24:25.679952 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-05 00:24:25.679963 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-05 00:24:25.679974 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-3) 2026-04-05 00:24:25.679985 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-05 00:24:25.679995 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-05 00:24:25.680006 | orchestrator | 2026-04-05 00:24:25.680017 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-05 00:24:25.680029 | orchestrator | Sunday 05 April 2026 00:24:21 +0000 (0:00:05.610) 0:00:21.105 ********** 2026-04-05 00:24:25.680058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-05 00:24:25.680072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-05 00:24:25.680083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-05 00:24:25.680094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-05 00:24:25.680106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-05 00:24:25.680118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-05 00:24:25.680132 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-05 00:24:25.680145 | orchestrator | 2026-04-05 00:24:25.680176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:25.680190 | orchestrator | Sunday 05 April 2026 00:24:21 +0000 (0:00:00.218) 0:00:21.323 ********** 2026-04-05 00:24:25.680204 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBINK4ddP/rCllqRJuJl/74xAqvWEi5kMZS12GFWud9ERj/E5BlOTaj/5olQmkObpaB7eDRZUy1rjwfbjz/Hoe8k=) 2026-04-05 00:24:25.680217 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoMTGJ1dfIJcTffQVQcTyIjmzm2NdgNU/cFbDEIBjisEP6DcfEOay23tZFK7c0TThc1eKhyI4xDo5BJx6zCs7++6HrLLW01FJ4Z/+5WggWDAOPTB2EauuAJkVSjUiKTqHmU6HOkGdrHCmWwxGktsKbFltfNe8yulS2hJrNQlr47B9XAR1HwQpsth+G0sSA+20T3nYKf1koOlf40WQr4hCL+1Rcz+rpIVvugD0LkIrB125r+TlDzS8Wklg6o+NsUDzwMBY3zrCO1vXeZc81mwzXKy9StNggan5Ry9pqqYGs0kjwflQniV1uAxXFxt4OZfd27vbDdRWxfdkanDfR0KbJOtFmtImaj47J+2Us/z1zMshKHEEVYMTs1xQl8MML5klhH5CVp/ADgoOZQCe8dg08tILDO5bTEXLXviE1NvY+GrQDxWDwmmEPL+wBNbQw/JFdaupE1rlVSd6QXWNoLffUX8El+izw5rutfxqb++UFMUrObs8/DNmr6lqMMEKDE5U=) 2026-04-05 00:24:25.680242 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDLvnArcDz/O5bgdf4OoFCU5dihhQD1b/nTZm/wGZ8Bh) 2026-04-05 00:24:25.680255 | orchestrator | 2026-04-05 00:24:25.680269 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:25.680281 | orchestrator | Sunday 05 April 2026 00:24:22 +0000 (0:00:01.183) 0:00:22.506 ********** 2026-04-05 00:24:25.680294 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7E1Jj46/n1//75rMsI9NHe6QGJQOwW71/K3QD2AD/u1AgsJkkbJXuS9hvS3JWEpM31BhMzqPoXBykF+IZwYv4zY8ehQzVnRk+2H5Nos9rAZ+4SSD9f6fS/JhLQ1AeBmIviAiXmVNuCO0kL7ZR+vwG3y7qhjWhhD3ZcMcjSwDJs6gpLh/TWrA8eeyGtn3Jw9SpmvIMvB4+Cd145sOf+OZGjhIq6EvkfUghK5Royiq24d9v3VWc8qeViCK/q+WxuPyf1iMKQrxcBGydkoMta3JFVvT4NaxAkjnX+5gkGcGJ4oMXUDytWK9vBCwAyq28t0h6U1HqoHSCd07A53J5N1kT545mTuil+e0oU4VXT3LL+PrJlgt8YwUcDWDrfuGlCeXs+8Dzfu6/sJT3tXllP8oOFuXJlJuPWk1tD6mximfPjg4Vm0sA9o0rJQvjbJXWgbCZL7kwiAC3AhE4y8yXNx+f0cZfCydrgUu17EPCB2NwlIjQ6piojVvNbQ10ADMI17k=) 2026-04-05 00:24:25.680307 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJfFXPQ0T6WOS+ne3nvD1d8SSzAO7Lg1caPwUDCXNPR7936uwbk+rTA3Q/AMKv967cHWyTM0X6QhHkoCk1T721E=) 2026-04-05 00:24:25.680321 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTaJIkFIvskzvXoOxI0d5UALQUxkSzP07mfPzV4gtsO) 2026-04-05 00:24:25.680333 | orchestrator | 2026-04-05 00:24:25.680345 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:25.680358 | orchestrator | Sunday 05 April 2026 00:24:23 +0000 (0:00:01.192) 0:00:23.699 ********** 2026-04-05 00:24:25.680371 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2ft3NpO3HjS2rqWlgiVQZMRB62mdo0USk3OdGKLYr9WuIGorVkU31c3O8UfkX3qO4dnwE1FcDjk7LC2qkU17Q=) 2026-04-05 00:24:25.680384 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0xNiCLls7ti1MwEUoT2V5x1Yp2bLMm5r/oRnSEr8j82E6kDdWekamU7vgzAToauk5S055Gf0WlKFl4Gucz6zzfKQis30yWwLOwd5DlIoA8bZ5PE49mxqTsE10vJWFCbuAlIuPjsfb9vNGegYtbv4xkZhNBNUuMaQAf0d6M/ld/SR9DikI1PuqfosbZC3IFIt9H1Ojbk0DU4IIgA4T9Xh7TkCkNh2I7FT76jVGonEx7ju3gjK49ppRK5UV//ecDYz7HMBFwe6JyRFANwYeKbgI573myNk58k31LmCWema7pa5IOXJ0mNIzawVqcoE+MBJ6upe9zmeNsJcgHNw9Z+J1tZOm7a31ohIP9c8D51HqjpBmvHaVhCio9c9MVjN+52+S7oJRlfIGip/XJrRWaZ/ZgDtFcjaPd5xeVMBt/1kFSD8UKj/SINdIs16q65SAjwJQIpCxfGFaSyoJBvofMy33lrEiv0Y6En8hk2IAFhqs5Le2mZrnrFYkQ7w2RpPgwVM=) 2026-04-05 00:24:25.680397 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINP5ElyeZf0AcO1Ite7cELNcibAENbJNx3lvCVLQYwBm) 2026-04-05 00:24:25.680409 | orchestrator | 2026-04-05 00:24:25.680421 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:25.680434 | orchestrator | Sunday 05 April 2026 00:24:24 +0000 (0:00:01.161) 0:00:24.860 ********** 2026-04-05 00:24:25.680446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEO1HEtitnY1jxmU80sKLYmwOb7TfY+X6/m0mEYYL+wCe77UG426lEeoVtM8u5EfPv5imI/z0WQgifd7bws3KTQ=) 2026-04-05 00:24:25.680470 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDuJ3tRMDUL3FqxZSKCB4m7+PtwlCRM2sSJpNM3GyupM+IHjbjnznpVD4YvEeNbKB4ZQ33tGI7FKqeW+LgERcLB194ehFZEm9MsZ4I6wRc6Vt+CyoXN+CRne7gLvuL+saCsIaPlNGT6Mj/hC6SjFTAJiaJlGCmdt2rzx11EVJDzHapnHr7hAtNJ6RB/Nf795LD6YtsINWyW2dTEIbLlfWzn5/nGjTdcNb7qaCREflqF5ICOzEyuJheC8CE5XxDfOqRA2fp0oOlvtpniBE/PhLqQEtBgKXZeu1aGq25UTIPDkalj08PMrdN6LfbZWQxpVmfkeDDpnueArQn77/hwzmiawKB445osmybGiCDCsfCHnTsSnON6iklmtxf7CIRLyIy7FfCDgExX7W3CaqTliDpY6aCpUyz7qbZIt9FDdmwxbPHWXDHlCk3PX3NI3+tZdIwNVUZZb3lVqpEfPnmMTuyAOd/LCZtCoCxab1eVOUpXKLGEE+x34uOcgu90vU3Sy/M=) 2026-04-05 00:24:30.702132 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL4KFnBdq22WmIYbVk0YXmWsg0m9tkRmlvbejzdcB6XN) 2026-04-05 00:24:30.702273 | orchestrator | 2026-04-05 00:24:30.702289 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:30.702302 | orchestrator | Sunday 05 April 2026 00:24:26 +0000 (0:00:01.211) 0:00:26.072 ********** 2026-04-05 00:24:30.702314 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFYz/OOtWlh27Q7civi36hEcK2HM8hO5+xwcHVbKAoB1J2tr01VZnMAbOvW+51qpjDJBjgAmi/1oiz1xVg/8y2s=) 2026-04-05 00:24:30.702348 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIf9KoQtVzxI9a+PhpHhBL3tTWjWHd864ZwZ8wm/rsxOw2pIAtPdcUzUWix2+WOWxERxi5fjxozCD3cC+tABIgvW2YbTAVkGkJjay8o2NpQmPhuGe2Y+QnB0ffN3ST8Gbmvb8Z/gkAdt2QYLYy+VkpSEpKnr3D4eGJaPVX9e7QG1uL/UnbaVmZvTCdoULgnCoiL0mr/XB5rWylcV8OWlrVN6JM5rquivJxms3WC9fyK7lK5OSDvbjYBTCp7hGDWlPRXmzM7+h0942QWksMz9xqQ6Rf4uHAmbboiJVcE/sa1OaFDmdAN0zJ+xw+GFLj3zalEVJmrryKNwEGgLP7LluEFveEqg1/o8qmTb7U8HB/lHcb7Z2MHjGmNQ9nwexB+vHzEeIf3aVkrpiXTzfkenYxiCCnzkMfaOGiGtM78L32NkEizgdvSNumDEwCJAuH+UhqaXvUhiQ0tnJD/iKBCHIuhTgfXxp93wGi54Fgohck2ZFzKk6pbSlmaiWSzfJBf48=) 2026-04-05 00:24:30.702362 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzFVTfZL6pa1pIz0SOlPAhMlu3MoEazGXKeZLHbs0B5) 2026-04-05 00:24:30.702374 | orchestrator | 2026-04-05 00:24:30.702392 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:30.702411 | orchestrator | Sunday 05 April 2026 00:24:27 +0000 (0:00:01.153) 0:00:27.226 ********** 2026-04-05 00:24:30.702432 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDhHRnUfAcxQTwXsrGuKjbnOgFc79omYW8Yp9VvwVcrBUD4iyFDqLvoYntavfwdfHMiS7HXsKqWdJdlXZS6O1XNnre/ZeZJCTY2pF2bwKFx/l/ptP+098SQuTDpnJZHMvU+pHcZ/ZlRb70iMJuVG8qPnCiZv5J6QWBFnAX5XS3IoiIlqzVYiRvh2lG/ANzuLBVw2iCj+cD1MrMDCe2Vf/wRwQq9XCZJbiPXgxelU6+0KXHnSUkKukTBAp5jAJBSzAKaKXOBGqIByOn6dz1Pb3VKePIzSuti0tLIy9qpAplbtNxTI8FkgVBpD8S6IWCf5OlhBbahoM8HpJ4k4/GsDNfSgS1S7RcBiFfAbiX4mFAYPpfzzm4nlflYnYeyuZBsyWAP6ixGzFFUzHFhH0Xxx7t+xBmV9GcAaJybgM9lvNGWdbjGWcM5TkwDGbVqahhHsY66RlxKIWH6kM9E3kqCuviL3g7EuupWUsF1+agCi6S7H+Dr/LCKQv2xnyXTThY1NpE=) 2026-04-05 00:24:30.702452 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAcWQuQm3NL0NIhnlRI7YVtIj43yFFOEeSiwf4ulOSn2rFaHluS0ptwrpIhMbVIPuPBUYu8jfc6myySO414C5uE=) 2026-04-05 00:24:30.702512 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPKGXBPpc4phOOABs9uMmDEMtECZse3qeTc3rFSjBZ/p) 2026-04-05 00:24:30.702534 | orchestrator | 2026-04-05 00:24:30.702554 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:24:30.702572 | orchestrator | Sunday 05 April 2026 00:24:28 +0000 (0:00:01.166) 0:00:28.392 ********** 2026-04-05 00:24:30.702592 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDevIY+apRzAaiJ4TRo9ghcUGA4q3HK+1i2GajWGwXYifYdaChcyYe5wGjLz4cCse9MDCTzHr2V9yMXczocryajQBsFqeilBHridFdQ7oofOMs+n8bLTIL3tuWkhXTl1mvubd3ELUFoDXBsCWIy0z1danzrrsVn4bso4/uuYG1ai2Wm2mJmHnavB5sHwh8sET7jteyua3zNyhkwOFYvT5Zw3ZhcnTKoK8g8ugIQsNiA8Apg2m7IIxlrx8ASJ0jleqYW8CLKnwVd0VrVczA9yXhOlHG61dHdRU+gwVKDV+SHBLNKkfQmE6Ru6SiG3U/rIW9QB+4EYssmq2Ymc3leKf5BMe8m9qcESUewx3O4TjthGE9jJ5ItlsPKN0Lj5YAIJIvYl9oSqmkCx/FkszxXDGFqocLJUQOCvSeuhDfpCq1k70x4j+ky/GjshpS1dBoRdAVPu+7Fa3azrdtYQBpPXmZmZX4K4ucp88a6zpaTzdd+sdzrcjoV65CUYqZk1Cyck/0=) 2026-04-05 00:24:30.702643 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJLtr/r+YuFNVFAo3522NB6PzdEOgIjZ0eMNMTRxeIVS) 2026-04-05 00:24:30.702659 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBNT9+yuGEITBiV9yv2sNShyp6arq6Txaq3J9kLRHG2nfFOjni+4OygrHAFqgovODsg4fsF6ijMrp+CgMHT07fc=) 2026-04-05 00:24:30.702673 | orchestrator | 2026-04-05 00:24:30.702686 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-05 00:24:30.702699 | orchestrator | Sunday 05 April 2026 00:24:29 +0000 (0:00:01.163) 0:00:29.555 ********** 2026-04-05 00:24:30.702713 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-05 00:24:30.702726 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-05 00:24:30.702738 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-05 00:24:30.702751 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-05 00:24:30.702764 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-05 00:24:30.702796 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-05 00:24:30.702811 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-05 00:24:30.702824 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:24:30.702837 | orchestrator | 2026-04-05 00:24:30.702849 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-05 00:24:30.702862 | orchestrator | Sunday 05 April 2026 00:24:29 +0000 (0:00:00.190) 0:00:29.746 ********** 2026-04-05 00:24:30.702874 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:24:30.702886 | orchestrator | 2026-04-05 00:24:30.702899 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-05 00:24:30.702911 | orchestrator | Sunday 05 April 2026 
00:24:29 +0000 (0:00:00.052) 0:00:29.798 ********** 2026-04-05 00:24:30.702924 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:24:30.702935 | orchestrator | 2026-04-05 00:24:30.702949 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-05 00:24:30.702961 | orchestrator | Sunday 05 April 2026 00:24:29 +0000 (0:00:00.062) 0:00:29.861 ********** 2026-04-05 00:24:30.702972 | orchestrator | changed: [testbed-manager] 2026-04-05 00:24:30.702983 | orchestrator | 2026-04-05 00:24:30.702994 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:24:30.703005 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:24:30.703017 | orchestrator | 2026-04-05 00:24:30.703027 | orchestrator | 2026-04-05 00:24:30.703038 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:24:30.703049 | orchestrator | Sunday 05 April 2026 00:24:30 +0000 (0:00:00.538) 0:00:30.399 ********** 2026-04-05 00:24:30.703059 | orchestrator | =============================================================================== 2026-04-05 00:24:30.703070 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.73s 2026-04-05 00:24:30.703081 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.61s 2026-04-05 00:24:30.703092 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.45s 2026-04-05 00:24:30.703102 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-04-05 00:24:30.703113 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-04-05 00:24:30.703124 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 
2026-04-05 00:24:30.703134 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-04-05 00:24:30.703145 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-04-05 00:24:30.703155 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-04-05 00:24:30.703174 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-05 00:24:30.703185 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-05 00:24:30.703195 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-05 00:24:30.703206 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-05 00:24:30.703217 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-04-05 00:24:30.703227 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-04-05 00:24:30.703238 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-05 00:24:30.703249 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.54s 2026-04-05 00:24:30.703259 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.22s 2026-04-05 00:24:30.703271 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-04-05 00:24:30.703282 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-04-05 00:24:30.938589 | orchestrator | + osism apply squid 2026-04-05 00:24:42.252744 | orchestrator | 2026-04-05 00:24:42 | INFO  | Prepare task for execution of squid. 
2026-04-05 00:24:42.342443 | orchestrator | 2026-04-05 00:24:42 | INFO  | Task 465fed42-7825-4da5-b93d-ec243b9b3c85 (squid) was prepared for execution. 2026-04-05 00:24:42.342606 | orchestrator | 2026-04-05 00:24:42 | INFO  | It takes a moment until task 465fed42-7825-4da5-b93d-ec243b9b3c85 (squid) has been started and output is visible here. 2026-04-05 00:26:45.127711 | orchestrator | 2026-04-05 00:26:45.127828 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-05 00:26:45.127844 | orchestrator | 2026-04-05 00:26:45.127856 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-05 00:26:45.127868 | orchestrator | Sunday 05 April 2026 00:24:45 +0000 (0:00:00.222) 0:00:00.222 ********** 2026-04-05 00:26:45.127879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:26:45.127891 | orchestrator | 2026-04-05 00:26:45.127902 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-05 00:26:45.127913 | orchestrator | Sunday 05 April 2026 00:24:45 +0000 (0:00:00.079) 0:00:00.301 ********** 2026-04-05 00:26:45.127923 | orchestrator | ok: [testbed-manager] 2026-04-05 00:26:45.127935 | orchestrator | 2026-04-05 00:26:45.127946 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-05 00:26:45.127957 | orchestrator | Sunday 05 April 2026 00:24:48 +0000 (0:00:03.006) 0:00:03.308 ********** 2026-04-05 00:26:45.127968 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-05 00:26:45.127999 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-05 00:26:45.128011 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-05 00:26:45.128022 | orchestrator | 2026-04-05 00:26:45.128033 
| orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-05 00:26:45.128044 | orchestrator | Sunday 05 April 2026 00:24:50 +0000 (0:00:01.409) 0:00:04.718 ********** 2026-04-05 00:26:45.128055 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-05 00:26:45.128066 | orchestrator | 2026-04-05 00:26:45.128077 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-05 00:26:45.128088 | orchestrator | Sunday 05 April 2026 00:24:51 +0000 (0:00:01.168) 0:00:05.886 ********** 2026-04-05 00:26:45.128099 | orchestrator | ok: [testbed-manager] 2026-04-05 00:26:45.128110 | orchestrator | 2026-04-05 00:26:45.128121 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-05 00:26:45.128132 | orchestrator | Sunday 05 April 2026 00:24:51 +0000 (0:00:00.390) 0:00:06.277 ********** 2026-04-05 00:26:45.128168 | orchestrator | changed: [testbed-manager] 2026-04-05 00:26:45.128184 | orchestrator | 2026-04-05 00:26:45.128196 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-05 00:26:45.128207 | orchestrator | Sunday 05 April 2026 00:24:52 +0000 (0:00:01.020) 0:00:07.298 ********** 2026-04-05 00:26:45.128218 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-05 00:26:45.128229 | orchestrator | ok: [testbed-manager] 2026-04-05 00:26:45.128240 | orchestrator | 2026-04-05 00:26:45.128251 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-05 00:26:45.128264 | orchestrator | Sunday 05 April 2026 00:25:31 +0000 (0:00:39.034) 0:00:46.333 ********** 2026-04-05 00:26:45.128277 | orchestrator | changed: [testbed-manager] 2026-04-05 00:26:45.128289 | orchestrator | 2026-04-05 00:26:45.128307 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-05 00:26:45.128321 | orchestrator | Sunday 05 April 2026 00:25:44 +0000 (0:00:12.158) 0:00:58.491 ********** 2026-04-05 00:26:45.128332 | orchestrator | Pausing for 60 seconds 2026-04-05 00:26:45.128343 | orchestrator | changed: [testbed-manager] 2026-04-05 00:26:45.128353 | orchestrator | 2026-04-05 00:26:45.128364 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-05 00:26:45.128375 | orchestrator | Sunday 05 April 2026 00:26:44 +0000 (0:01:00.095) 0:01:58.586 ********** 2026-04-05 00:26:45.128385 | orchestrator | ok: [testbed-manager] 2026-04-05 00:26:45.128436 | orchestrator | 2026-04-05 00:26:45.128447 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-05 00:26:45.128458 | orchestrator | Sunday 05 April 2026 00:26:44 +0000 (0:00:00.076) 0:01:58.663 ********** 2026-04-05 00:26:45.128468 | orchestrator | changed: [testbed-manager] 2026-04-05 00:26:45.128479 | orchestrator | 2026-04-05 00:26:45.128490 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:26:45.128500 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:26:45.128511 | orchestrator | 2026-04-05 00:26:45.128522 | orchestrator | 2026-04-05 00:26:45.128533 | orchestrator | 
TASKS RECAP ********************************************************************
2026-04-05 00:26:45.128543 | orchestrator | Sunday 05 April 2026 00:26:44 +0000 (0:00:00.657) 0:01:59.321 **********
2026-04-05 00:26:45.128554 | orchestrator | ===============================================================================
2026-04-05 00:26:45.128565 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s
2026-04-05 00:26:45.128575 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 39.03s
2026-04-05 00:26:45.128586 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.16s
2026-04-05 00:26:45.128596 | orchestrator | osism.services.squid : Install required packages ------------------------ 3.01s
2026-04-05 00:26:45.128607 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.41s
2026-04-05 00:26:45.128618 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.17s
2026-04-05 00:26:45.128628 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.02s
2026-04-05 00:26:45.128639 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s
2026-04-05 00:26:45.128650 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s
2026-04-05 00:26:45.128660 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-04-05 00:26:45.128671 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-04-05 00:26:45.383629 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-05 00:26:45.383721 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-05 00:26:45.465050 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 00:26:45.465152 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/
2026-04-05 00:26:45.470987 | orchestrator | + set -e
2026-04-05 00:26:45.471064 | orchestrator | + NAMESPACE=kolla/release/
2026-04-05 00:26:45.471106 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-05 00:26:45.478847 | orchestrator | ++ semver 10.0.0 9.0.0
2026-04-05 00:26:45.546735 | orchestrator | + [[ 1 -lt 0 ]]
2026-04-05 00:26:45.548011 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-05 00:26:56.918408 | orchestrator | 2026-04-05 00:26:56 | INFO  | Prepare task for execution of operator.
2026-04-05 00:26:57.002518 | orchestrator | 2026-04-05 00:26:57 | INFO  | Task 6b7d71e1-93da-4955-99cc-15a1251f914a (operator) was prepared for execution.
2026-04-05 00:26:57.002616 | orchestrator | 2026-04-05 00:26:57 | INFO  | It takes a moment until task 6b7d71e1-93da-4955-99cc-15a1251f914a (operator) has been started and output is visible here.
2026-04-05 00:27:12.963931 | orchestrator |
2026-04-05 00:27:12.964035 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-05 00:27:12.964051 | orchestrator |
2026-04-05 00:27:12.964064 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:27:12.964076 | orchestrator | Sunday 05 April 2026 00:27:00 +0000 (0:00:00.200) 0:00:00.200 **********
2026-04-05 00:27:12.964087 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:12.964099 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:12.964110 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:12.964120 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:12.964131 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:12.964141 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:12.964152 | orchestrator |
2026-04-05 00:27:12.964163 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-05 00:27:12.964174 | orchestrator | Sunday 05 April 2026 00:27:03 +0000 (0:00:03.419) 0:00:03.620 **********
2026-04-05 00:27:12.964184 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:12.964195 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:12.964205 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:12.964215 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:12.964226 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:12.964236 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:12.964247 | orchestrator |
2026-04-05 00:27:12.964257 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-05 00:27:12.964268 | orchestrator |
2026-04-05 00:27:12.964278 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-05 00:27:12.964289 | orchestrator | Sunday 05 April 2026 00:27:04 +0000 (0:00:00.180) 0:00:04.516 **********
2026-04-05 00:27:12.964300 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:12.964311 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:12.964322 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:12.964332 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:12.964342 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:12.964353 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:12.964439 | orchestrator |
2026-04-05 00:27:12.964451 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-05 00:27:12.964468 | orchestrator | Sunday 05 April 2026 00:27:04 +0000 (0:00:00.180) 0:00:04.697 **********
2026-04-05 00:27:12.964487 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:12.964506 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:12.964524 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:12.964542 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:12.964560 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:12.964576 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:12.964595 | orchestrator |
2026-04-05 00:27:12.964614 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-05 00:27:12.964633 | orchestrator | Sunday 05 April 2026 00:27:05 +0000 (0:00:00.200) 0:00:04.897 **********
2026-04-05 00:27:12.964653 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:12.964674 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:12.964694 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:12.964714 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:12.964756 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:12.964769 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:12.964782 | orchestrator |
2026-04-05 00:27:12.964794 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-05 00:27:12.964807 | orchestrator | Sunday 05 April 2026 00:27:05 +0000 (0:00:00.916) 0:00:05.641 **********
2026-04-05 00:27:12.964818 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:12.964829 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:12.964839 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:12.964850 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:12.964860 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:12.964871 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:12.964881 | orchestrator |
2026-04-05 00:27:12.964892 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-05 00:27:12.964902 | orchestrator | Sunday 05 April 2026 00:27:06 +0000 (0:00:01.219) 0:00:06.557 **********
2026-04-05 00:27:12.964913 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-05 00:27:12.964924 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-05 00:27:12.964935 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-05 00:27:12.964945 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-05 00:27:12.964956 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-05 00:27:12.964966 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-05 00:27:12.964977 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-05 00:27:12.964987 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-05 00:27:12.964998 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-05 00:27:12.965008 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-05 00:27:12.965019 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-05 00:27:12.965029 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-05 00:27:12.965040 | orchestrator |
2026-04-05 00:27:12.965050 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-05 00:27:12.965061 | orchestrator | Sunday 05 April 2026 00:27:07 +0000 (0:00:01.372) 0:00:07.777 **********
2026-04-05 00:27:12.965072 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:12.965083 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:12.965093 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:12.965104 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:12.965114 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:12.965125 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:12.965135 | orchestrator |
2026-04-05 00:27:12.965155 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-05 00:27:12.965173 | orchestrator | Sunday 05 April 2026 00:27:09 +0000 (0:00:01.372) 0:00:09.149 **********
2026-04-05 00:27:12.965190 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:27:12.965206 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:27:12.965223 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:27:12.965240 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:27:12.965257 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:27:12.965321 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:27:12.965344 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-05 00:27:12.965388 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-05 00:27:12.965402 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-05 00:27:12.965413 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-05 00:27:12.965424 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-05 00:27:12.965435 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-05 00:27:12.965445 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:27:12.965468 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-05 00:27:12.965479 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-05 00:27:12.965490 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-05 00:27:12.965501 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:27:12.965511 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:27:12.965522 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:27:12.965532 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:27:12.965543 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:27:12.965553 | orchestrator |
2026-04-05 00:27:12.965564 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-05 00:27:12.965576 | orchestrator | Sunday 05 April 2026 00:27:10 +0000 (0:00:01.327) 0:00:10.477 **********
2026-04-05 00:27:12.965592 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:12.965603 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:12.965614 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:12.965625 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:12.965635 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:12.965646 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:12.965656 | orchestrator |
2026-04-05 00:27:12.965667 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-05 00:27:12.965678 | orchestrator | Sunday 05 April 2026 00:27:10 +0000 (0:00:00.180) 0:00:10.657 **********
2026-04-05 00:27:12.965688 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:12.965699 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:12.965709 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:12.965720 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:12.965730 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:12.965741 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:12.965751 | orchestrator |
2026-04-05 00:27:12.965762 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-05 00:27:12.965773 | orchestrator | Sunday 05 April 2026 00:27:11 +0000 (0:00:00.194) 0:00:10.852 **********
2026-04-05 00:27:12.965784 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:12.965794 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:12.965805 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:12.965815 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:12.965826 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:12.965836 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:12.965847 | orchestrator |
2026-04-05 00:27:12.965857 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-05 00:27:12.965868 | orchestrator | Sunday 05 April 2026 00:27:11 +0000 (0:00:00.670) 0:00:11.523 **********
2026-04-05 00:27:12.965878 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:12.965889 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:12.965900 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:12.965910 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:12.965920 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:12.965931 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:12.965941 | orchestrator |
2026-04-05 00:27:12.965952 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-05 00:27:12.965963 | orchestrator | Sunday 05 April 2026 00:27:11 +0000 (0:00:00.221) 0:00:11.744 **********
2026-04-05 00:27:12.965974 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-05 00:27:12.965984 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 00:27:12.965995 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:12.966006 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:12.966077 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 00:27:12.966098 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:12.966109 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 00:27:12.966120 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:12.966130 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 00:27:12.966141 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:12.966152 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-05 00:27:12.966162 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:12.966173 | orchestrator |
2026-04-05 00:27:12.966184 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-05 00:27:12.966194 | orchestrator | Sunday 05 April 2026 00:27:12 +0000 (0:00:00.758) 0:00:12.503 **********
2026-04-05 00:27:12.966205 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:12.966216 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:12.966226 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:12.966237 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:12.966248 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:12.966258 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:12.966269 | orchestrator |
2026-04-05 00:27:12.966280 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-05 00:27:12.966290 | orchestrator | Sunday 05 April 2026 00:27:12 +0000 (0:00:00.153) 0:00:12.657 **********
2026-04-05 00:27:12.966301 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:12.966312 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:12.966322 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:12.966333 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:12.966353 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:14.376680 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:14.376811 | orchestrator |
2026-04-05 00:27:14.376838 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-05 00:27:14.376859 | orchestrator | Sunday 05 April 2026 00:27:12 +0000 (0:00:00.180) 0:00:12.837 **********
2026-04-05 00:27:14.376878 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:14.376897 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:14.376914 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:14.376932 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:14.376949 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:14.376967 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:14.376986 | orchestrator |
2026-04-05 00:27:14.377005 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-05 00:27:14.377023 | orchestrator | Sunday 05 April 2026 00:27:13 +0000 (0:00:00.174) 0:00:13.012 **********
2026-04-05 00:27:14.377042 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:14.377059 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:14.377077 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:14.377095 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:14.377114 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:14.377131 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:14.377150 | orchestrator |
2026-04-05 00:27:14.377169 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-05 00:27:14.377188 | orchestrator | Sunday 05 April 2026 00:27:13 +0000 (0:00:00.694) 0:00:13.706 **********
2026-04-05 00:27:14.377208 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:14.377228 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:14.377248 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:14.377268 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:14.377287 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:14.377306 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:14.377325 | orchestrator |
2026-04-05 00:27:14.377344 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:27:14.377403 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:27:14.377459 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:27:14.377480 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:27:14.377499 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:27:14.377519 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:27:14.377539 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:27:14.377556 | orchestrator |
2026-04-05 00:27:14.377576 | orchestrator |
2026-04-05 00:27:14.377593 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:27:14.377611 | orchestrator | Sunday 05 April 2026 00:27:14 +0000 (0:00:00.279) 0:00:13.986 **********
2026-04-05 00:27:14.377629 | orchestrator | ===============================================================================
2026-04-05 00:27:14.377646 | orchestrator | Gathering Facts --------------------------------------------------------- 3.42s
2026-04-05 00:27:14.377664 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.37s
2026-04-05 00:27:14.377682 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.33s
2026-04-05 00:27:14.377701 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s
2026-04-05 00:27:14.377718 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.92s
2026-04-05 00:27:14.377735 | orchestrator | Do not require tty for all users ---------------------------------------- 0.90s
2026-04-05 00:27:14.377753 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s
2026-04-05 00:27:14.377770 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.74s
2026-04-05 00:27:14.377788 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s
2026-04-05 00:27:14.377806 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.67s
2026-04-05 00:27:14.377824 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s
2026-04-05 00:27:14.377842 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s
2026-04-05 00:27:14.377860 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s
2026-04-05 00:27:14.377879 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-04-05 00:27:14.377896 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-04-05 00:27:14.377914 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2026-04-05 00:27:14.377934 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2026-04-05 00:27:14.377954 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-04-05 00:27:14.377975 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-04-05 00:27:14.618743 | orchestrator | + osism apply --environment custom facts
2026-04-05 00:27:15.938147 | orchestrator | 2026-04-05 00:27:15 | INFO  | Trying to run play facts in environment custom
2026-04-05 00:27:26.040434 | orchestrator | 2026-04-05 00:27:26 | INFO  | Prepare task for execution of facts.
2026-04-05 00:27:26.135354 | orchestrator | 2026-04-05 00:27:26 | INFO  | Task e69b74a3-f00c-41d8-9239-7246eb7f581b (facts) was prepared for execution.
2026-04-05 00:27:26.135445 | orchestrator | 2026-04-05 00:27:26 | INFO  | It takes a moment until task e69b74a3-f00c-41d8-9239-7246eb7f581b (facts) has been started and output is visible here.
2026-04-05 00:28:08.820257 | orchestrator |
2026-04-05 00:28:08.820421 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-05 00:28:08.820435 | orchestrator |
2026-04-05 00:28:08.820443 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 00:28:08.820452 | orchestrator | Sunday 05 April 2026 00:27:29 +0000 (0:00:00.140) 0:00:00.140 **********
2026-04-05 00:28:08.820460 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:28:08.820508 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:28:08.820516 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:08.820524 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:08.820531 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:28:08.820539 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:08.820546 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:08.820555 | orchestrator |
2026-04-05 00:28:08.820562 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-05 00:28:08.820570 | orchestrator | Sunday 05 April 2026 00:27:30 +0000 (0:00:01.470) 0:00:01.610 **********
2026-04-05 00:28:08.820577 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:08.820584 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:28:08.820596 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:08.820603 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:08.820610 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:08.820617 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:28:08.820625 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:28:08.820632 | orchestrator |
2026-04-05 00:28:08.820639 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-05 00:28:08.820647 | orchestrator |
2026-04-05 00:28:08.820654 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-05 00:28:08.820661 | orchestrator | Sunday 05 April 2026 00:27:32 +0000 (0:00:01.393) 0:00:03.004 **********
2026-04-05 00:28:08.820669 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:08.820676 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:08.820683 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:08.820690 | orchestrator |
2026-04-05 00:28:08.820698 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-05 00:28:08.820706 | orchestrator | Sunday 05 April 2026 00:27:32 +0000 (0:00:00.110) 0:00:03.115 **********
2026-04-05 00:28:08.820713 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:08.820720 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:08.820727 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:08.820735 | orchestrator |
2026-04-05 00:28:08.820742 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-05 00:28:08.820750 | orchestrator | Sunday 05 April 2026 00:27:32 +0000 (0:00:00.215) 0:00:03.330 **********
2026-04-05 00:28:08.820757 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:08.820764 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:08.820771 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:08.820779 | orchestrator |
2026-04-05 00:28:08.820786 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-05 00:28:08.820793 | orchestrator | Sunday 05 April 2026 00:27:32 +0000 (0:00:00.279) 0:00:03.610 **********
2026-04-05 00:28:08.820802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:28:08.820811 | orchestrator |
2026-04-05 00:28:08.820820 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-05 00:28:08.820829 | orchestrator | Sunday 05 April 2026 00:27:33 +0000 (0:00:00.145) 0:00:03.755 **********
2026-04-05 00:28:08.820838 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:08.820846 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:08.820855 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:08.820863 | orchestrator |
2026-04-05 00:28:08.820872 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-05 00:28:08.820900 | orchestrator | Sunday 05 April 2026 00:27:33 +0000 (0:00:00.444) 0:00:04.200 **********
2026-04-05 00:28:08.820909 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:28:08.820918 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:28:08.820926 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:28:08.820935 | orchestrator |
2026-04-05 00:28:08.820944 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-05 00:28:08.820952 | orchestrator | Sunday 05 April 2026 00:27:33 +0000 (0:00:00.135) 0:00:04.335 **********
2026-04-05 00:28:08.820961 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:08.820969 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:08.820978 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:08.820987 | orchestrator |
2026-04-05 00:28:08.820996 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-05 00:28:08.821004 | orchestrator | Sunday 05 April 2026 00:27:34 +0000 (0:00:01.199) 0:00:05.535 **********
2026-04-05 00:28:08.821013 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:08.821021 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:08.821030 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:08.821039 | orchestrator |
2026-04-05 00:28:08.821048 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-05 00:28:08.821057 | orchestrator | Sunday 05 April 2026 00:27:35 +0000 (0:00:00.475) 0:00:06.010 **********
2026-04-05 00:28:08.821065 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:08.821072 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:08.821079 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:08.821087 | orchestrator |
2026-04-05 00:28:08.821094 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-05 00:28:08.821101 | orchestrator | Sunday 05 April 2026 00:27:36 +0000 (0:00:01.057) 0:00:07.067 **********
2026-04-05 00:28:08.821108 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:08.821116 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:08.821123 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:08.821130 | orchestrator |
2026-04-05 00:28:08.821137 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-05 00:28:08.821144 | orchestrator | Sunday 05 April 2026 00:27:52 +0000 (0:00:15.823) 0:00:22.891 **********
2026-04-05 00:28:08.821152 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:28:08.821159 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:28:08.821166 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:28:08.821173 | orchestrator |
2026-04-05 00:28:08.821181 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-05 00:28:08.821202 | orchestrator | Sunday 05 April 2026 00:27:52 +0000 (0:00:00.111) 0:00:23.003 **********
2026-04-05 00:28:08.821210 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:08.821231 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:08.821239 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:08.821246 | orchestrator |
2026-04-05 00:28:08.821253 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 00:28:08.821260 | orchestrator | Sunday 05 April 2026 00:28:00 +0000 (0:00:07.950) 0:00:30.953 **********
2026-04-05 00:28:08.821268 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:08.821275 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:08.821282 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:08.821289 | orchestrator |
2026-04-05 00:28:08.821316 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-05 00:28:08.821323 | orchestrator | Sunday 05 April 2026 00:28:00 +0000 (0:00:00.399) 0:00:31.352 **********
2026-04-05 00:28:08.821331 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-05 00:28:08.821338 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-05 00:28:08.821349 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-05 00:28:08.821357 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-05 00:28:08.821370 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-05 00:28:08.821378 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-05 00:28:08.821385 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-05 00:28:08.821392 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-05 00:28:08.821399 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-05 00:28:08.821407 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-05 00:28:08.821414 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-05 00:28:08.821421 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-05 00:28:08.821428 | orchestrator |
2026-04-05 00:28:08.821435 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-05 00:28:08.821443 | orchestrator | Sunday 05 April 2026 00:28:04 +0000 (0:00:03.396) 0:00:34.749 **********
2026-04-05 00:28:08.821450 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:08.821457 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:08.821464 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:08.821471 | orchestrator |
2026-04-05 00:28:08.821479 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-05 00:28:08.821486 | orchestrator |
2026-04-05 00:28:08.821493 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 00:28:08.821501 | orchestrator | Sunday 05 April 2026 00:28:05 +0000 (0:00:01.122) 0:00:35.871 **********
2026-04-05 00:28:08.821508 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:08.821515 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:08.821522 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:08.821530 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:08.821537 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:08.821544 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:08.821551 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:08.821558 | orchestrator |
2026-04-05 00:28:08.821565 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:28:08.821573 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:28:08.821581 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:28:08.821590 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:28:08.821597 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:28:08.821605 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:28:08.821612 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:28:08.821620 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:28:08.821627 | orchestrator |
2026-04-05 00:28:08.821634 | orchestrator |
2026-04-05 00:28:08.821641 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:28:08.821649 | orchestrator | Sunday 05 April 2026 00:28:08 +0000 (0:00:03.636) 0:00:39.508 **********
2026-04-05 00:28:08.821656 | orchestrator | ===============================================================================
2026-04-05 00:28:08.821663 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.82s
2026-04-05 00:28:08.821670 | orchestrator | Install required packages (Debian) -------------------------------------- 7.95s
2026-04-05 00:28:08.821683 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.64s
2026-04-05 00:28:08.821690 | orchestrator | Copy fact files --------------------------------------------------------- 3.40s
2026-04-05 00:28:08.821697 | orchestrator | Create custom facts directory ------------------------------------------- 1.47s
2026-04-05 00:28:08.821704 | orchestrator | Copy fact file ---------------------------------------------------------- 1.39s
2026-04-05 00:28:08.821716 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.20s
2026-04-05 00:28:09.038620 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.12s
2026-04-05 00:28:09.038697 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-04-05 00:28:09.038704 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2026-04-05 00:28:09.038711 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-04-05 00:28:09.038717 | orchestrator | Create custom facts directory ------------------------------------------- 0.40s
2026-04-05 00:28:09.038723 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.28s
2026-04-05 00:28:09.038729 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-04-05 00:28:09.038734 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-04-05 00:28:09.038741 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-04-05 00:28:09.038746 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-04-05 00:28:09.038752 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-04-05 00:28:09.397437 | orchestrator | + osism apply bootstrap
2026-04-05 00:28:20.896164 | orchestrator | 2026-04-05 00:28:20 | INFO  | Prepare task for execution of bootstrap.
2026-04-05 00:28:20.995667 | orchestrator | 2026-04-05 00:28:20 | INFO  | Task 57bf6bfe-9620-47b5-9054-fdf02cbead10 (bootstrap) was prepared for execution.
2026-04-05 00:28:20.995765 | orchestrator | 2026-04-05 00:28:20 | INFO  | It takes a moment until task 57bf6bfe-9620-47b5-9054-fdf02cbead10 (bootstrap) has been started and output is visible here.
2026-04-05 00:28:38.000359 | orchestrator |
2026-04-05 00:28:38.000469 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-05 00:28:38.000488 | orchestrator |
2026-04-05 00:28:38.000501 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-05 00:28:38.000514 | orchestrator | Sunday 05 April 2026 00:28:24 +0000 (0:00:00.206) 0:00:00.206 **********
2026-04-05 00:28:38.000528 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:38.000542 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:38.000554 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:38.000567 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:38.000578 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:38.000590 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:38.000602 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:38.000615 | orchestrator |
2026-04-05 00:28:38.000627 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-05 00:28:38.000639 | orchestrator |
2026-04-05 00:28:38.000649 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 00:28:38.000657 | orchestrator | Sunday 05 April 2026 00:28:25 +0000 (0:00:00.348) 0:00:00.555 **********
2026-04-05 00:28:38.000664 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:38.000672 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:38.000679 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:38.000686 | orchestrator | ok: [testbed-manager]
2026-04-05
00:28:38.000705 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:38.000713 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:38.000720 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:38.000729 | orchestrator | 2026-04-05 00:28:38.000738 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2026-04-05 00:28:38.000764 | orchestrator | 2026-04-05 00:28:38.000774 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 00:28:38.000782 | orchestrator | Sunday 05 April 2026 00:28:30 +0000 (0:00:04.687) 0:00:05.243 ********** 2026-04-05 00:28:38.000792 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-05 00:28:38.000801 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-04-05 00:28:38.000809 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-05 00:28:38.000818 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 00:28:38.000826 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-05 00:28:38.000835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 00:28:38.000843 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-05 00:28:38.000851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 00:28:38.000861 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-04-05 00:28:38.000869 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-05 00:28:38.000880 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-04-05 00:28:38.000894 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 00:28:38.000910 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 00:28:38.000924 | orchestrator | skipping: [testbed-manager] => 
(item=testbed-node-4)  2026-04-05 00:28:38.000937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 00:28:38.000948 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-05 00:28:38.000958 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 00:28:38.000968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 00:28:38.000978 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 00:28:38.000989 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-05 00:28:38.000999 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 00:28:38.001010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-04-05 00:28:38.001020 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-05 00:28:38.001031 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 00:28:38.001042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 00:28:38.001052 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:28:38.001062 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:28:38.001073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 00:28:38.001083 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-04-05 00:28:38.001093 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-05 00:28:38.001103 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:28:38.001114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 00:28:38.001124 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 00:28:38.001134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 00:28:38.001144 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 
00:28:38.001153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 00:28:38.001167 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-05 00:28:38.001178 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 00:28:38.001187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-04-05 00:28:38.001197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 00:28:38.001208 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:28:38.001218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 00:28:38.001229 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 00:28:38.001246 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-05 00:28:38.001280 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 00:28:38.001291 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 00:28:38.001317 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 00:28:38.001326 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-05 00:28:38.001335 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:28:38.001343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 00:28:38.001352 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 00:28:38.001360 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 00:28:38.001369 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:28:38.001378 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 00:28:38.001386 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 00:28:38.001395 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:28:38.001403 | orchestrator | 2026-04-05 00:28:38.001412 | 
orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-04-05 00:28:38.001421 | orchestrator | 2026-04-05 00:28:38.001429 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-04-05 00:28:38.001438 | orchestrator | Sunday 05 April 2026 00:28:30 +0000 (0:00:00.590) 0:00:05.834 ********** 2026-04-05 00:28:38.001447 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:38.001456 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:38.001464 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:38.001473 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:38.001481 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:38.001490 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:38.001498 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:38.001507 | orchestrator | 2026-04-05 00:28:38.001516 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-04-05 00:28:38.001524 | orchestrator | Sunday 05 April 2026 00:28:32 +0000 (0:00:01.401) 0:00:07.235 ********** 2026-04-05 00:28:38.001533 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:38.001541 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:38.001549 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:38.001558 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:38.001566 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:38.001575 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:38.001583 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:38.001592 | orchestrator | 2026-04-05 00:28:38.001602 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-04-05 00:28:38.001618 | orchestrator | Sunday 05 April 2026 00:28:33 +0000 (0:00:01.334) 0:00:08.570 ********** 2026-04-05 00:28:38.001633 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:28:38.001649 | orchestrator | 2026-04-05 00:28:38.001664 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-05 00:28:38.001678 | orchestrator | Sunday 05 April 2026 00:28:33 +0000 (0:00:00.356) 0:00:08.926 ********** 2026-04-05 00:28:38.001694 | orchestrator | changed: [testbed-manager] 2026-04-05 00:28:38.001708 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:38.001722 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:38.001736 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:38.001751 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:38.001766 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:38.001782 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:38.001796 | orchestrator | 2026-04-05 00:28:38.001811 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-05 00:28:38.001827 | orchestrator | Sunday 05 April 2026 00:28:35 +0000 (0:00:01.694) 0:00:10.621 ********** 2026-04-05 00:28:38.001851 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:28:38.001865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:28:38.001876 | orchestrator | 2026-04-05 00:28:38.001885 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-05 00:28:38.001894 | orchestrator | Sunday 05 April 2026 00:28:35 +0000 (0:00:00.300) 0:00:10.921 ********** 2026-04-05 00:28:38.001902 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:38.001911 | 
orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:38.001920 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:38.001928 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:38.001936 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:38.001945 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:38.001953 | orchestrator | 2026-04-05 00:28:38.001962 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-05 00:28:38.001970 | orchestrator | Sunday 05 April 2026 00:28:36 +0000 (0:00:01.029) 0:00:11.951 ********** 2026-04-05 00:28:38.001979 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:28:38.001987 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:38.001996 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:38.002004 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:38.002013 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:38.002085 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:38.002094 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:38.002103 | orchestrator | 2026-04-05 00:28:38.002127 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-05 00:28:38.002136 | orchestrator | Sunday 05 April 2026 00:28:37 +0000 (0:00:00.641) 0:00:12.593 ********** 2026-04-05 00:28:38.002145 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:28:38.002153 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:28:38.002195 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:28:38.002204 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:28:38.002212 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:28:38.002221 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:28:38.002229 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:38.002238 | orchestrator | 2026-04-05 00:28:38.002247 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-05 00:28:38.002279 | orchestrator | Sunday 05 April 2026 00:28:37 +0000 (0:00:00.485) 0:00:13.078 ********** 2026-04-05 00:28:38.002296 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:28:38.002312 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:28:38.002333 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:28:50.619228 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:28:50.619370 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:28:50.619384 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:28:50.619391 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:28:50.619398 | orchestrator | 2026-04-05 00:28:50.619407 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-05 00:28:50.619416 | orchestrator | Sunday 05 April 2026 00:28:38 +0000 (0:00:00.262) 0:00:13.341 ********** 2026-04-05 00:28:50.619426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:28:50.619446 | orchestrator | 2026-04-05 00:28:50.619455 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-05 00:28:50.619463 | orchestrator | Sunday 05 April 2026 00:28:38 +0000 (0:00:00.375) 0:00:13.716 ********** 2026-04-05 00:28:50.619471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:28:50.619501 | orchestrator | 2026-04-05 00:28:50.619509 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-05 
00:28:50.619516 | orchestrator | Sunday 05 April 2026 00:28:38 +0000 (0:00:00.409) 0:00:14.126 ********** 2026-04-05 00:28:50.619524 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.619533 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:50.619540 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.619548 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:50.619555 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:50.619562 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:50.619570 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.619577 | orchestrator | 2026-04-05 00:28:50.619585 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-05 00:28:50.619592 | orchestrator | Sunday 05 April 2026 00:28:40 +0000 (0:00:01.382) 0:00:15.509 ********** 2026-04-05 00:28:50.619600 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:28:50.619607 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:28:50.619614 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:28:50.619621 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:28:50.619628 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:28:50.619635 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:28:50.619642 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:28:50.619650 | orchestrator | 2026-04-05 00:28:50.619657 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-05 00:28:50.619664 | orchestrator | Sunday 05 April 2026 00:28:40 +0000 (0:00:00.239) 0:00:15.749 ********** 2026-04-05 00:28:50.619671 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.619687 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:50.619698 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:50.619708 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.619719 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:50.619728 | orchestrator 
| ok: [testbed-node-4] 2026-04-05 00:28:50.619738 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.619744 | orchestrator | 2026-04-05 00:28:50.619751 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-05 00:28:50.619760 | orchestrator | Sunday 05 April 2026 00:28:41 +0000 (0:00:00.579) 0:00:16.328 ********** 2026-04-05 00:28:50.619767 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:28:50.619774 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:28:50.619784 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:28:50.619793 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:28:50.619803 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:28:50.619812 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:28:50.619821 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:28:50.619829 | orchestrator | 2026-04-05 00:28:50.619836 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-05 00:28:50.619845 | orchestrator | Sunday 05 April 2026 00:28:41 +0000 (0:00:00.297) 0:00:16.625 ********** 2026-04-05 00:28:50.619854 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.619863 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:50.619872 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:50.619882 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:50.619890 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:50.619897 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:50.619904 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:50.619912 | orchestrator | 2026-04-05 00:28:50.619920 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-05 00:28:50.619928 | orchestrator | Sunday 05 April 2026 00:28:41 +0000 (0:00:00.581) 0:00:17.207 ********** 2026-04-05 00:28:50.619936 | orchestrator | ok: 
[testbed-manager] 2026-04-05 00:28:50.619944 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:50.619952 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:50.619970 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:50.619977 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:50.619984 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:50.619992 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:50.620000 | orchestrator | 2026-04-05 00:28:50.620007 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-05 00:28:50.620015 | orchestrator | Sunday 05 April 2026 00:28:43 +0000 (0:00:01.192) 0:00:18.399 ********** 2026-04-05 00:28:50.620023 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:50.620031 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.620038 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.620046 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:50.620054 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:50.620062 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:50.620069 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.620077 | orchestrator | 2026-04-05 00:28:50.620083 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-05 00:28:50.620090 | orchestrator | Sunday 05 April 2026 00:28:44 +0000 (0:00:01.115) 0:00:19.515 ********** 2026-04-05 00:28:50.620127 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:28:50.620136 | orchestrator | 2026-04-05 00:28:50.620143 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-05 00:28:50.620150 | orchestrator | Sunday 05 April 2026 
00:28:44 +0000 (0:00:00.379) 0:00:19.894 ********** 2026-04-05 00:28:50.620157 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:28:50.620164 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:28:50.620171 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:50.620177 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:28:50.620184 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:28:50.620191 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:50.620198 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:50.620205 | orchestrator | 2026-04-05 00:28:50.620213 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-05 00:28:50.620220 | orchestrator | Sunday 05 April 2026 00:28:45 +0000 (0:00:01.274) 0:00:21.169 ********** 2026-04-05 00:28:50.620227 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.620234 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:50.620261 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:50.620269 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:50.620276 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.620281 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:50.620287 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.620293 | orchestrator | 2026-04-05 00:28:50.620300 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-05 00:28:50.620306 | orchestrator | Sunday 05 April 2026 00:28:46 +0000 (0:00:00.280) 0:00:21.450 ********** 2026-04-05 00:28:50.620312 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.620318 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:50.620326 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:50.620332 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:50.620338 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.620345 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:50.620351 | 
orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.620357 | orchestrator | 2026-04-05 00:28:50.620364 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-05 00:28:50.620371 | orchestrator | Sunday 05 April 2026 00:28:46 +0000 (0:00:00.253) 0:00:21.703 ********** 2026-04-05 00:28:50.620377 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.620384 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:50.620390 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:50.620397 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:50.620403 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.620417 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:50.620423 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.620429 | orchestrator | 2026-04-05 00:28:50.620435 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-05 00:28:50.620441 | orchestrator | Sunday 05 April 2026 00:28:46 +0000 (0:00:00.251) 0:00:21.954 ********** 2026-04-05 00:28:50.620448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:28:50.620455 | orchestrator | 2026-04-05 00:28:50.620461 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-05 00:28:50.620466 | orchestrator | Sunday 05 April 2026 00:28:47 +0000 (0:00:00.337) 0:00:22.292 ********** 2026-04-05 00:28:50.620473 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.620479 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:28:50.620486 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:50.620492 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:50.620498 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.620504 | orchestrator | ok: 
[testbed-node-4] 2026-04-05 00:28:50.620510 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.620516 | orchestrator | 2026-04-05 00:28:50.620523 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-05 00:28:50.620530 | orchestrator | Sunday 05 April 2026 00:28:47 +0000 (0:00:00.599) 0:00:22.892 ********** 2026-04-05 00:28:50.620536 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:28:50.620543 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:28:50.620550 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:28:50.620557 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:28:50.620564 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:28:50.620570 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:28:50.620577 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:28:50.620583 | orchestrator | 2026-04-05 00:28:50.620590 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-05 00:28:50.620597 | orchestrator | Sunday 05 April 2026 00:28:47 +0000 (0:00:00.241) 0:00:23.133 ********** 2026-04-05 00:28:50.620604 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.620610 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.620617 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:28:50.620624 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:50.620631 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:28:50.620638 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.620644 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:50.620651 | orchestrator | 2026-04-05 00:28:50.620664 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-05 00:28:50.620671 | orchestrator | Sunday 05 April 2026 00:28:49 +0000 (0:00:01.091) 0:00:24.224 ********** 2026-04-05 00:28:50.620678 | orchestrator | ok: [testbed-node-0] 2026-04-05 
00:28:50.620684 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.620690 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:28:50.620697 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.620703 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:28:50.620710 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:50.620715 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:28:50.620722 | orchestrator | 2026-04-05 00:28:50.620728 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-05 00:28:50.620734 | orchestrator | Sunday 05 April 2026 00:28:49 +0000 (0:00:00.561) 0:00:24.786 ********** 2026-04-05 00:28:50.620740 | orchestrator | ok: [testbed-manager] 2026-04-05 00:28:50.620746 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:28:50.620752 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:28:50.620758 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:28:50.620774 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:29:33.512653 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.512797 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:29:33.512815 | orchestrator | 2026-04-05 00:29:33.512828 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-05 00:29:33.512840 | orchestrator | Sunday 05 April 2026 00:28:50 +0000 (0:00:01.066) 0:00:25.853 ********** 2026-04-05 00:29:33.512851 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.512862 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.512873 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.512884 | orchestrator | changed: [testbed-manager] 2026-04-05 00:29:33.512895 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:29:33.512905 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:29:33.512916 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:29:33.512926 | orchestrator | 2026-04-05 00:29:33.512937 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-05 00:29:33.512948 | orchestrator | Sunday 05 April 2026 00:29:07 +0000 (0:00:16.694) 0:00:42.547 ********** 2026-04-05 00:29:33.512959 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.512970 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.512980 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.512991 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.513001 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.513012 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.513022 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.513033 | orchestrator | 2026-04-05 00:29:33.513043 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-05 00:29:33.513054 | orchestrator | Sunday 05 April 2026 00:29:07 +0000 (0:00:00.279) 0:00:42.826 ********** 2026-04-05 00:29:33.513065 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.513076 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.513086 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.513097 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.513107 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.513118 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.513128 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.513138 | orchestrator | 2026-04-05 00:29:33.513149 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-05 00:29:33.513161 | orchestrator | Sunday 05 April 2026 00:29:07 +0000 (0:00:00.319) 0:00:43.146 ********** 2026-04-05 00:29:33.513171 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.513182 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.513195 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.513207 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.513220 | orchestrator | ok: 
[testbed-node-3] 2026-04-05 00:29:33.513260 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.513273 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.513286 | orchestrator | 2026-04-05 00:29:33.513299 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-05 00:29:33.513312 | orchestrator | Sunday 05 April 2026 00:29:08 +0000 (0:00:00.249) 0:00:43.395 ********** 2026-04-05 00:29:33.513326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:29:33.513342 | orchestrator | 2026-04-05 00:29:33.513355 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-05 00:29:33.513369 | orchestrator | Sunday 05 April 2026 00:29:08 +0000 (0:00:00.377) 0:00:43.773 ********** 2026-04-05 00:29:33.513382 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.513394 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.513407 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.513420 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.513433 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.513445 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.513458 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.513471 | orchestrator | 2026-04-05 00:29:33.513484 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-05 00:29:33.513506 | orchestrator | Sunday 05 April 2026 00:29:10 +0000 (0:00:01.899) 0:00:45.672 ********** 2026-04-05 00:29:33.513520 | orchestrator | changed: [testbed-manager] 2026-04-05 00:29:33.513533 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:29:33.513546 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:29:33.513558 | orchestrator | 
changed: [testbed-node-1] 2026-04-05 00:29:33.513571 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:29:33.513584 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:29:33.513596 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:29:33.513607 | orchestrator | 2026-04-05 00:29:33.513617 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-05 00:29:33.513628 | orchestrator | Sunday 05 April 2026 00:29:11 +0000 (0:00:01.096) 0:00:46.769 ********** 2026-04-05 00:29:33.513639 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.513650 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.513660 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.513671 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.513682 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.513692 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.513703 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.513714 | orchestrator | 2026-04-05 00:29:33.513725 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-05 00:29:33.513755 | orchestrator | Sunday 05 April 2026 00:29:12 +0000 (0:00:00.843) 0:00:47.612 ********** 2026-04-05 00:29:33.513777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:29:33.513795 | orchestrator | 2026-04-05 00:29:33.513823 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-05 00:29:33.513843 | orchestrator | Sunday 05 April 2026 00:29:12 +0000 (0:00:00.347) 0:00:47.960 ********** 2026-04-05 00:29:33.513860 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:29:33.513878 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:29:33.513897 | 
orchestrator | changed: [testbed-manager] 2026-04-05 00:29:33.513909 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:29:33.513920 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:29:33.513931 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:29:33.513941 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:29:33.513952 | orchestrator | 2026-04-05 00:29:33.513981 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-04-05 00:29:33.513993 | orchestrator | Sunday 05 April 2026 00:29:13 +0000 (0:00:01.048) 0:00:49.009 ********** 2026-04-05 00:29:33.514004 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:29:33.514080 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:29:33.514095 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:29:33.514106 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:29:33.514117 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:29:33.514128 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:29:33.514138 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:29:33.514149 | orchestrator | 2026-04-05 00:29:33.514160 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-05 00:29:33.514171 | orchestrator | Sunday 05 April 2026 00:29:14 +0000 (0:00:00.266) 0:00:49.275 ********** 2026-04-05 00:29:33.514182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:29:33.514193 | orchestrator | 2026-04-05 00:29:33.514204 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-05 00:29:33.514215 | orchestrator | Sunday 05 April 2026 00:29:14 +0000 (0:00:00.304) 0:00:49.580 ********** 2026-04-05 00:29:33.514226 | orchestrator | ok: 
[testbed-node-3] 2026-04-05 00:29:33.514284 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.514307 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.514318 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.514329 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.514339 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.514350 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.514360 | orchestrator | 2026-04-05 00:29:33.514371 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-05 00:29:33.514382 | orchestrator | Sunday 05 April 2026 00:29:16 +0000 (0:00:01.762) 0:00:51.342 ********** 2026-04-05 00:29:33.514393 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:29:33.514404 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:29:33.514415 | orchestrator | changed: [testbed-manager] 2026-04-05 00:29:33.514425 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:29:33.514436 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:29:33.514447 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:29:33.514457 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:29:33.514467 | orchestrator | 2026-04-05 00:29:33.514478 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-05 00:29:33.514489 | orchestrator | Sunday 05 April 2026 00:29:17 +0000 (0:00:01.206) 0:00:52.548 ********** 2026-04-05 00:29:33.514500 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:29:33.514510 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:29:33.514521 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:29:33.514531 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:29:33.514542 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:29:33.514552 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:29:33.514563 | orchestrator | changed: [testbed-manager] 2026-04-05 00:29:33.514573 | 
orchestrator | 2026-04-05 00:29:33.514584 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-05 00:29:33.514595 | orchestrator | Sunday 05 April 2026 00:29:31 +0000 (0:00:13.674) 0:01:06.223 ********** 2026-04-05 00:29:33.514605 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.514616 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.514627 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.514637 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.514648 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.514658 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.514669 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.514679 | orchestrator | 2026-04-05 00:29:33.514690 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-05 00:29:33.514701 | orchestrator | Sunday 05 April 2026 00:29:31 +0000 (0:00:00.662) 0:01:06.885 ********** 2026-04-05 00:29:33.514712 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.514722 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.514733 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.514743 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.514754 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.514764 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.514775 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.514785 | orchestrator | 2026-04-05 00:29:33.514796 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-05 00:29:33.514807 | orchestrator | Sunday 05 April 2026 00:29:32 +0000 (0:00:00.920) 0:01:07.806 ********** 2026-04-05 00:29:33.514817 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.514828 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.514839 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.514849 | orchestrator | ok: 
[testbed-node-2] 2026-04-05 00:29:33.514859 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.514870 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.514881 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.514891 | orchestrator | 2026-04-05 00:29:33.514902 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-05 00:29:33.514913 | orchestrator | Sunday 05 April 2026 00:29:32 +0000 (0:00:00.261) 0:01:08.068 ********** 2026-04-05 00:29:33.514924 | orchestrator | ok: [testbed-manager] 2026-04-05 00:29:33.514941 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:29:33.514952 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:29:33.514963 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:29:33.514974 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:29:33.514984 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:29:33.514995 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:29:33.515005 | orchestrator | 2026-04-05 00:29:33.515016 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-05 00:29:33.515042 | orchestrator | Sunday 05 April 2026 00:29:33 +0000 (0:00:00.265) 0:01:08.333 ********** 2026-04-05 00:29:33.515065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:29:33.515077 | orchestrator | 2026-04-05 00:29:33.515096 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-05 00:31:58.404098 | orchestrator | Sunday 05 April 2026 00:29:33 +0000 (0:00:00.378) 0:01:08.712 ********** 2026-04-05 00:31:58.404218 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:31:58.404236 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:31:58.404247 | orchestrator | 
ok: [testbed-node-2] 2026-04-05 00:31:58.404258 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:31:58.404268 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:31:58.404279 | orchestrator | ok: [testbed-manager] 2026-04-05 00:31:58.404289 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:31:58.404298 | orchestrator | 2026-04-05 00:31:58.404308 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-05 00:31:58.404319 | orchestrator | Sunday 05 April 2026 00:29:35 +0000 (0:00:01.732) 0:01:10.444 ********** 2026-04-05 00:31:58.404329 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:31:58.404341 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:31:58.404352 | orchestrator | changed: [testbed-manager] 2026-04-05 00:31:58.404362 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:31:58.404372 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:31:58.404382 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:31:58.404392 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:31:58.404402 | orchestrator | 2026-04-05 00:31:58.404435 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-05 00:31:58.404447 | orchestrator | Sunday 05 April 2026 00:29:35 +0000 (0:00:00.576) 0:01:11.021 ********** 2026-04-05 00:31:58.404456 | orchestrator | ok: [testbed-manager] 2026-04-05 00:31:58.404467 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:31:58.404477 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:31:58.404488 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:31:58.404498 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:31:58.404508 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:31:58.404519 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:31:58.404529 | orchestrator | 2026-04-05 00:31:58.404539 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-05 
00:31:58.404549 | orchestrator | Sunday 05 April 2026 00:29:36 +0000 (0:00:00.270) 0:01:11.291 ********** 2026-04-05 00:31:58.404560 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:31:58.404569 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:31:58.404580 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:31:58.404590 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:31:58.404601 | orchestrator | ok: [testbed-manager] 2026-04-05 00:31:58.404611 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:31:58.404621 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:31:58.404631 | orchestrator | 2026-04-05 00:31:58.404642 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-05 00:31:58.404703 | orchestrator | Sunday 05 April 2026 00:29:37 +0000 (0:00:01.192) 0:01:12.484 ********** 2026-04-05 00:31:58.404718 | orchestrator | changed: [testbed-manager] 2026-04-05 00:31:58.404734 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:31:58.404744 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:31:58.404783 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:31:58.404795 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:31:58.404805 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:31:58.404815 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:31:58.404825 | orchestrator | 2026-04-05 00:31:58.404836 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-05 00:31:58.404847 | orchestrator | Sunday 05 April 2026 00:29:38 +0000 (0:00:01.708) 0:01:14.193 ********** 2026-04-05 00:31:58.404857 | orchestrator | ok: [testbed-manager] 2026-04-05 00:31:58.404870 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:31:58.404881 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:31:58.404891 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:31:58.404904 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:31:58.404916 | orchestrator | ok: 
[testbed-node-1] 2026-04-05 00:31:58.404927 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:31:58.404938 | orchestrator | 2026-04-05 00:31:58.404949 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-05 00:31:58.404961 | orchestrator | Sunday 05 April 2026 00:29:41 +0000 (0:00:02.345) 0:01:16.538 ********** 2026-04-05 00:31:58.404973 | orchestrator | ok: [testbed-manager] 2026-04-05 00:31:58.404986 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:31:58.404996 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:31:58.405007 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:31:58.405017 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:31:58.405028 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:31:58.405038 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:31:58.405049 | orchestrator | 2026-04-05 00:31:58.405059 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-05 00:31:58.405069 | orchestrator | Sunday 05 April 2026 00:30:17 +0000 (0:00:36.616) 0:01:53.155 ********** 2026-04-05 00:31:58.405079 | orchestrator | changed: [testbed-manager] 2026-04-05 00:31:58.405090 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:31:58.405101 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:31:58.405112 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:31:58.405122 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:31:58.405132 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:31:58.405142 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:31:58.405153 | orchestrator | 2026-04-05 00:31:58.405163 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-05 00:31:58.405174 | orchestrator | Sunday 05 April 2026 00:31:41 +0000 (0:01:23.370) 0:03:16.526 ********** 2026-04-05 00:31:58.405185 | orchestrator | ok: [testbed-manager] 2026-04-05 00:31:58.405195 | orchestrator | 
ok: [testbed-node-0] 2026-04-05 00:31:58.405214 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:31:58.405224 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:31:58.405234 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:31:58.405243 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:31:58.405254 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:31:58.405264 | orchestrator | 2026-04-05 00:31:58.405275 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-05 00:31:58.405284 | orchestrator | Sunday 05 April 2026 00:31:43 +0000 (0:00:01.795) 0:03:18.321 ********** 2026-04-05 00:31:58.405293 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:31:58.405304 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:31:58.405314 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:31:58.405322 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:31:58.405330 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:31:58.405340 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:31:58.405349 | orchestrator | changed: [testbed-manager] 2026-04-05 00:31:58.405359 | orchestrator | 2026-04-05 00:31:58.405368 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-05 00:31:58.405378 | orchestrator | Sunday 05 April 2026 00:31:56 +0000 (0:00:13.107) 0:03:31.429 ********** 2026-04-05 00:31:58.405418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-05 00:31:58.405445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-05 00:31:58.405458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-05 00:31:58.405472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-05 00:31:58.405483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-05 00:31:58.405491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-05 00:31:58.405501 | orchestrator | 2026-04-05 00:31:58.405511 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-05 00:31:58.405520 | orchestrator | Sunday 05 April 2026 00:31:56 +0000 (0:00:00.424) 0:03:31.853 ********** 2026-04-05 00:31:58.405529 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:31:58.405539 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:31:58.405548 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:31:58.405557 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:31:58.405565 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:31:58.405574 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:31:58.405583 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:31:58.405592 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:31:58.405601 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:31:58.405610 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:31:58.405619 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:31:58.405628 | orchestrator | 2026-04-05 00:31:58.405642 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-05 00:31:58.405651 | orchestrator | Sunday 05 April 2026 00:31:58 +0000 (0:00:01.669) 0:03:33.523 ********** 2026-04-05 00:31:58.405691 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:31:58.405703 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:31:58.405712 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:31:58.405722 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:31:58.405732 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:31:58.405750 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:32:07.455605 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:32:07.455814 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:32:07.455830 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:32:07.455838 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:32:07.455847 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:07.455857 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:32:07.455864 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:32:07.455872 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:32:07.455880 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:32:07.455887 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:32:07.455894 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 
00:32:07.455902 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:32:07.455909 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:32:07.455916 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:32:07.455923 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:32:07.455931 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:32:07.455938 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:32:07.455945 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:32:07.455952 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:32:07.455960 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:32:07.455967 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:32:07.455974 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:32:07.455981 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:32:07.455988 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:32:07.455996 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:32:07.456003 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:32:07.456010 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:32:07.456038 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:32:07.456046 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:32:07.456054 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:32:07.456061 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:32:07.456068 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:32:07.456075 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:32:07.456082 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:32:07.456089 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:32:07.456097 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:32:07.456104 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:32:07.456111 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:32:07.456118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:32:07.456125 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:32:07.456132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:32:07.456139 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:32:07.456147 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:32:07.456169 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-05 00:32:07.456178 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-05 00:32:07.456187 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-05 00:32:07.456197 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-05 00:32:07.456205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-05 00:32:07.456214 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-05 00:32:07.456223 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-05 00:32:07.456232 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-05 00:32:07.456240 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:32:07.456248 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-05 00:32:07.456257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-05 00:32:07.456266 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-05 00:32:07.456275 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-05 00:32:07.456285 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-05 00:32:07.456294 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 
2026-04-05 00:32:07.456303 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-05 00:32:07.456312 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 00:32:07.456327 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 00:32:07.456336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-05 00:32:07.456345 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-05 00:32:07.456354 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-05 00:32:07.456363 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-05 00:32:07.456371 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-05 00:32:07.456380 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 00:32:07.456387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 00:32:07.456395 | orchestrator |
2026-04-05 00:32:07.456403 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-05 00:32:07.456410 | orchestrator | Sunday 05 April 2026 00:32:06 +0000 (0:00:07.898) 0:03:41.421 **********
2026-04-05 00:32:07.456417 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 00:32:07.456424 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 00:32:07.456431 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 00:32:07.456439 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 00:32:07.456446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 00:32:07.456467 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 00:32:07.456474 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 00:32:07.456482 | orchestrator |
2026-04-05 00:32:07.456490 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-05 00:32:07.456497 | orchestrator | Sunday 05 April 2026 00:32:06 +0000 (0:00:00.617) 0:03:42.038 **********
2026-04-05 00:32:07.456508 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:07.456515 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:07.456522 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:07.456530 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:07.456537 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:07.456544 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:07.456551 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:07.456559 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:07.456566 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:07.456573 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:07.456586 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:20.381480 | orchestrator |
2026-04-05 00:32:20.382450 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-05 00:32:20.382489 | orchestrator | Sunday 05 April 2026 00:32:07 +0000 (0:00:00.654) 0:03:42.693 **********
2026-04-05 00:32:20.382503 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:20.382514 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:20.382526 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:20.382560 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:20.382570 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:20.382580 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:20.382590 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:20.382599 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:20.382609 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:20.382619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:20.382628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 00:32:20.382638 | orchestrator |
2026-04-05 00:32:20.382648 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-05 00:32:20.382657 | orchestrator | Sunday 05 April 2026 00:32:07 +0000 (0:00:00.492) 0:03:43.185 **********
2026-04-05 00:32:20.382668 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 00:32:20.382677 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:20.382687 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 00:32:20.382697 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:20.382707 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 00:32:20.382750 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:20.382769 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 00:32:20.382785 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:20.382802 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 00:32:20.382812 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 00:32:20.382822 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 00:32:20.382831 | orchestrator |
2026-04-05 00:32:20.382841 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-05 00:32:20.382850 | orchestrator | Sunday 05 April 2026 00:32:08 +0000 (0:00:00.659) 0:03:43.845 **********
2026-04-05 00:32:20.382860 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:20.382870 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:20.382884 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:20.382900 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:20.382917 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:20.382931 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:20.382948 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:20.382962 | orchestrator |
2026-04-05 00:32:20.383098 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-05 00:32:20.383121 | orchestrator | Sunday 05 April 2026 00:32:08 +0000 (0:00:00.335) 0:03:44.181 **********
2026-04-05 00:32:20.383138 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:20.383155 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:20.383172 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:20.383188 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:20.383205 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:20.383222 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:20.383238 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:20.383255 | orchestrator |
2026-04-05 00:32:20.383271 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-05 00:32:20.383288 | orchestrator | Sunday 05 April 2026 00:32:14 +0000 (0:00:05.911) 0:03:50.092 **********
2026-04-05 00:32:20.383304 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-05 00:32:20.383338 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-05 00:32:20.383370 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:20.383386 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-05 00:32:20.383401 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:20.383415 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-05 00:32:20.383430 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:20.383445 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:20.383461 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-05 00:32:20.383553 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-05 00:32:20.383573 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:20.383591 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:20.383607 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-05 00:32:20.383622 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:20.383638 | orchestrator |
2026-04-05 00:32:20.383653 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-05 00:32:20.383670 | orchestrator | Sunday 05 April 2026 00:32:15 +0000 (0:00:00.318) 0:03:50.411 **********
2026-04-05 00:32:20.383687 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-05 00:32:20.383703 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-05 00:32:20.383747 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-05 00:32:20.383842 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-05 00:32:20.383859 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-05 00:32:20.383869 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-05 00:32:20.383878 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-05 00:32:20.383888 | orchestrator |
2026-04-05 00:32:20.383898 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-05 00:32:20.383907 | orchestrator | Sunday 05 April 2026 00:32:16 +0000 (0:00:01.086) 0:03:51.498 **********
2026-04-05 00:32:20.383919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:32:20.383932 | orchestrator |
2026-04-05 00:32:20.383941 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-05 00:32:20.383951 | orchestrator | Sunday 05 April 2026 00:32:16 +0000 (0:00:00.403) 0:03:51.901 **********
2026-04-05 00:32:20.383960 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:20.383970 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:20.383979 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:20.383988 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:20.383998 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:20.384007 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:20.384016 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:20.384026 | orchestrator |
2026-04-05 00:32:20.384035 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-05 00:32:20.384045 | orchestrator | Sunday 05 April 2026 00:32:18 +0000 (0:00:01.334) 0:03:53.236 **********
2026-04-05 00:32:20.384054 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:20.384064 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:20.384073 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:20.384082 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:20.384092 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:20.384101 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:20.384110 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:20.384119 | orchestrator |
2026-04-05 00:32:20.384129 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-05 00:32:20.384139 | orchestrator | Sunday 05 April 2026 00:32:18 +0000 (0:00:00.597) 0:03:53.833 **********
2026-04-05 00:32:20.384148 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:20.384158 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:20.384167 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:20.384188 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:20.384198 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:20.384208 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:20.384217 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:20.384226 | orchestrator |
2026-04-05 00:32:20.384236 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-05 00:32:20.384246 | orchestrator | Sunday 05 April 2026 00:32:19 +0000 (0:00:00.657) 0:03:54.490 **********
2026-04-05 00:32:20.384255 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:20.384265 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:20.384274 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:20.384284 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:20.384293 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:20.384302 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:20.384312 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:20.384321 | orchestrator |
2026-04-05 00:32:20.384331 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-05 00:32:20.384341 | orchestrator | Sunday 05 April 2026 00:32:19 +0000 (0:00:00.550) 0:03:55.041 **********
2026-04-05 00:32:20.384356 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347518.1868825, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:20.384377 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347526.604576, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:20.384388 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347545.078947, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:20.384423 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347544.3562691, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.139801 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347527.5503545, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.139928 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347553.199258, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.139942 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347538.0072494, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.139951 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.139960 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.139968 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.139976 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.140011 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.140027 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.140035 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:32:26.140044 | orchestrator |
2026-04-05 00:32:26.140054 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-05 00:32:26.140063 | orchestrator | Sunday 05 April 2026 00:32:20 +0000 (0:00:01.018) 0:03:56.059 **********
2026-04-05 00:32:26.140072 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:26.140081 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:26.140089 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:26.140096 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:26.140104 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:26.140112 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:26.140120 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:26.140128 | orchestrator |
2026-04-05 00:32:26.140136 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-05 00:32:26.140144 | orchestrator | Sunday 05 April 2026 00:32:22 +0000 (0:00:01.212) 0:03:57.272 **********
2026-04-05 00:32:26.140152 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:26.140160 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:26.140167 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:26.140178 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:26.140192 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:26.140205 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:26.140220 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:26.140234 | orchestrator |
2026-04-05 00:32:26.140257 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-05 00:32:26.140270 | orchestrator | Sunday 05 April 2026 00:32:23 +0000 (0:00:01.168) 0:03:58.441 **********
2026-04-05 00:32:26.140282 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:26.140296 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:26.140312 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:26.140325 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:26.140340 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:26.140354 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:26.140371 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:26.140385 | orchestrator |
2026-04-05 00:32:26.140404 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-05 00:32:26.140418 | orchestrator | Sunday 05 April 2026 00:32:24 +0000 (0:00:01.309) 0:03:59.751 **********
2026-04-05 00:32:26.140432 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:26.140453 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:26.140471 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:26.140492 | orchestrator | skipping: [testbed-node-2]
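The "Remove pam_motd.so rule" task above edits /etc/pam.d/sshd and /etc/pam.d/login on every host so that PAM no longer prints the dynamic motd. A minimal sketch of that transformation (the sample PAM content and helper name are assumptions for illustration; the role's actual mechanism may differ):

```python
# Hypothetical sketch: strip every rule line that invokes pam_motd.so,
# which is what removing the pam_motd rule from /etc/pam.d/* amounts to.
pam_sshd = """\
session    optional     pam_motd.so  motd=/run/motd.dynamic
session    optional     pam_motd.so noupdate
session    required     pam_limits.so
"""

def remove_pam_motd(content: str) -> str:
    """Drop lines referencing pam_motd.so; keep all other PAM rules."""
    kept = [line for line in content.splitlines() if "pam_motd.so" not in line]
    return "\n".join(kept) + "\n"

print(remove_pam_motd(pam_sshd))
```

Combined with the "Disable the dynamic motd-news service" and "Copy motd file" tasks, this leaves only the static motd/issue files the role installs.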
2026-04-05 00:32:26.140514 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:26.140539 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:26.140554 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:26.140601 | orchestrator |
2026-04-05 00:32:26.140623 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-05 00:32:26.140641 | orchestrator | Sunday 05 April 2026 00:32:24 +0000 (0:00:00.297) 0:04:00.048 **********
2026-04-05 00:32:26.140671 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:26.140698 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:26.140714 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:26.140752 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:26.140765 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:26.140777 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:26.140789 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:26.140801 | orchestrator |
2026-04-05 00:32:26.140814 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-05 00:32:26.140826 | orchestrator | Sunday 05 April 2026 00:32:25 +0000 (0:00:00.861) 0:04:00.909 **********
2026-04-05 00:32:26.140841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:32:26.140856 | orchestrator |
2026-04-05 00:32:26.140868 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-05 00:32:26.140888 | orchestrator | Sunday 05 April 2026 00:32:26 +0000 (0:00:00.435) 0:04:01.344 **********
2026-04-05 00:33:49.163281 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:49.163373 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:33:49.163385 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:33:49.163395 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:33:49.163403 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:33:49.163411 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:33:49.163419 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:33:49.163428 | orchestrator |
2026-04-05 00:33:49.163437 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-05 00:33:49.163447 | orchestrator | Sunday 05 April 2026 00:32:36 +0000 (0:00:10.125) 0:04:11.469 **********
2026-04-05 00:33:49.163455 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:49.163463 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:49.163471 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:49.163479 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:49.163487 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:49.163495 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:49.163502 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:49.163510 | orchestrator |
2026-04-05 00:33:49.163519 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-05 00:33:49.163527 | orchestrator | Sunday 05 April 2026 00:32:37 +0000 (0:00:01.237) 0:04:12.707 **********
2026-04-05 00:33:49.163534 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:49.163542 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:49.163550 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:49.163558 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:49.163566 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:49.163574 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:49.163582 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:49.163590 | orchestrator |
2026-04-05 00:33:49.163598 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-05 00:33:49.163606 | orchestrator | Sunday 05 April 2026 00:32:38 +0000 (0:00:00.354) 0:04:13.719 **********
2026-04-05 00:33:49.163614 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:49.163622 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:49.163630 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:49.163638 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:49.163646 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:49.163653 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:49.163661 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:49.163669 | orchestrator |
2026-04-05 00:33:49.163677 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-05 00:33:49.163705 | orchestrator | Sunday 05 April 2026 00:32:38 +0000 (0:00:00.319) 0:04:14.074 **********
2026-04-05 00:33:49.163714 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:49.163721 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:49.163729 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:49.163737 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:49.163745 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:49.163753 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:49.163761 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:49.163769 | orchestrator |
2026-04-05 00:33:49.163777 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-05 00:33:49.163785 | orchestrator | Sunday 05 April 2026 00:32:39 +0000 (0:00:00.319) 0:04:14.394 **********
2026-04-05 00:33:49.163793 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:49.163801 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:49.163809 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:49.163817 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:49.163825 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:49.163834 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:49.163844 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:49.163853 | orchestrator |
2026-04-05 00:33:49.163863 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-05 00:33:49.163872 | orchestrator | Sunday 05 April 2026 00:32:39 +0000 (0:00:00.362) 0:04:14.756 **********
2026-04-05 00:33:49.163881 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:49.163890 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:49.163899 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:49.163908 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:49.163981 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:49.163992 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:49.164001 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:49.164011 | orchestrator |
2026-04-05 00:33:49.164020 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-05 00:33:49.164043 | orchestrator | Sunday 05 April 2026 00:32:45 +0000 (0:00:05.699) 0:04:20.455 **********
2026-04-05 00:33:49.164055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:33:49.164066 | orchestrator |
2026-04-05 00:33:49.164076 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-05 00:33:49.164085 | orchestrator | Sunday 05 April 2026 00:32:45 +0000 (0:00:00.427) 0:04:20.883 **********
2026-04-05 00:33:49.164094 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-05 00:33:49.164103 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-05 00:33:49.164113 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-05 00:33:49.164122 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-05 00:33:49.164132 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:33:49.164142 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-05 00:33:49.164151 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-05 00:33:49.164161 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:33:49.164170 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-05 00:33:49.164179 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:33:49.164188 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-05 00:33:49.164199 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-05 00:33:49.164207 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-05 00:33:49.164215 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:33:49.164223 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:33:49.164231 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-05 00:33:49.164253 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-05 00:33:49.164269 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:33:49.164277 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-05 00:33:49.164285 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-05 00:33:49.164293 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:33:49.164301 | orchestrator |
2026-04-05 00:33:49.164308 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-05 00:33:49.164316 | orchestrator | Sunday 05 April 2026 00:32:46 +0000 (0:00:00.352) 0:04:21.236 **********
2026-04-05 00:33:49.164324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:33:49.164332 | orchestrator |
2026-04-05 00:33:49.164340 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-05 00:33:49.164348 | orchestrator | Sunday 05 April 2026 00:32:46 +0000 (0:00:00.630) 0:04:21.866 **********
2026-04-05 00:33:49.164356 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-05 00:33:49.164364 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:33:49.164372 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-05 00:33:49.164380 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:33:49.164388 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-05 00:33:49.164396 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-05 00:33:49.164404 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:33:49.164412 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:33:49.164419 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-05 00:33:49.164427 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-05 00:33:49.164435 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:33:49.164443 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:33:49.164451 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-05 00:33:49.164459 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:33:49.164467 | orchestrator |
2026-04-05 00:33:49.164475 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-05 00:33:49.164483 | orchestrator | Sunday 05 April 2026 00:32:46 +0000 (0:00:00.318) 0:04:22.185 **********
2026-04-05 00:33:49.164491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:33:49.164499 | orchestrator |
2026-04-05 00:33:49.164506 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-05 00:33:49.164514 | orchestrator | Sunday 05 April 2026 00:32:47 +0000 (0:00:00.432) 0:04:22.618 **********
2026-04-05 00:33:49.164522 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:33:49.164530 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:33:49.164538 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:33:49.164546 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:33:49.164554 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:33:49.164562 | orchestrator | changed: [testbed-manager]
2026-04-05 00:33:49.164570 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:33:49.164577 | orchestrator |
2026-04-05 00:33:49.164585 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-05 00:33:49.164593 | orchestrator | Sunday 05 April 2026 00:33:23 +0000 (0:00:36.483) 0:04:59.101 **********
2026-04-05 00:33:49.164601 | orchestrator | changed: [testbed-manager]
2026-04-05 00:33:49.164609 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:33:49.164617 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:33:49.164629 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:33:49.164637 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:33:49.164650 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:33:49.164658 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:33:49.164665 | orchestrator |
2026-04-05 00:33:49.164673 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-05 00:33:49.164681 | orchestrator |
Sunday 05 April 2026 00:33:32 +0000 (0:00:08.638) 0:05:07.740 ********** 2026-04-05 00:33:49.164689 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:49.164697 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:49.164705 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:49.164712 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:49.164720 | orchestrator | changed: [testbed-manager] 2026-04-05 00:33:49.164728 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:49.164736 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:49.164744 | orchestrator | 2026-04-05 00:33:49.164752 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-05 00:33:49.164760 | orchestrator | Sunday 05 April 2026 00:33:40 +0000 (0:00:08.260) 0:05:16.000 ********** 2026-04-05 00:33:49.164767 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:49.164775 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:33:49.164783 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:33:49.164791 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:33:49.164799 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:33:49.164807 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:33:49.164815 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:33:49.164823 | orchestrator | 2026-04-05 00:33:49.164830 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-05 00:33:49.164838 | orchestrator | Sunday 05 April 2026 00:33:42 +0000 (0:00:01.732) 0:05:17.732 ********** 2026-04-05 00:33:49.164847 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:49.164854 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:49.164862 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:49.164870 | orchestrator | changed: [testbed-manager] 2026-04-05 00:33:49.164878 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:49.164886 | orchestrator | changed: 
[testbed-node-5] 2026-04-05 00:33:49.164894 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:49.164902 | orchestrator | 2026-04-05 00:33:49.164931 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-05 00:34:00.757586 | orchestrator | Sunday 05 April 2026 00:33:49 +0000 (0:00:06.635) 0:05:24.367 ********** 2026-04-05 00:34:00.757722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:34:00.757750 | orchestrator | 2026-04-05 00:34:00.757769 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-05 00:34:00.757787 | orchestrator | Sunday 05 April 2026 00:33:49 +0000 (0:00:00.440) 0:05:24.808 ********** 2026-04-05 00:34:00.757803 | orchestrator | changed: [testbed-manager] 2026-04-05 00:34:00.757822 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:34:00.757838 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:34:00.757854 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:34:00.757869 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:34:00.757885 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:34:00.757900 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:34:00.757914 | orchestrator | 2026-04-05 00:34:00.757929 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-05 00:34:00.757972 | orchestrator | Sunday 05 April 2026 00:33:50 +0000 (0:00:00.726) 0:05:25.534 ********** 2026-04-05 00:34:00.757988 | orchestrator | ok: [testbed-manager] 2026-04-05 00:34:00.758005 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:34:00.758083 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:34:00.758103 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:34:00.758120 | 
orchestrator | ok: [testbed-node-4] 2026-04-05 00:34:00.758137 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:34:00.758230 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:34:00.758252 | orchestrator | 2026-04-05 00:34:00.758272 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-05 00:34:00.758289 | orchestrator | Sunday 05 April 2026 00:33:52 +0000 (0:00:01.842) 0:05:27.377 ********** 2026-04-05 00:34:00.758307 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:34:00.758325 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:34:00.758342 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:34:00.758360 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:34:00.758378 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:34:00.758395 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:34:00.758411 | orchestrator | changed: [testbed-manager] 2026-04-05 00:34:00.758427 | orchestrator | 2026-04-05 00:34:00.758443 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-05 00:34:00.758459 | orchestrator | Sunday 05 April 2026 00:33:52 +0000 (0:00:00.782) 0:05:28.160 ********** 2026-04-05 00:34:00.758477 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:00.758494 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:34:00.758512 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:00.758531 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:34:00.758549 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:34:00.758566 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:34:00.758582 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:34:00.758599 | orchestrator | 2026-04-05 00:34:00.758616 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-05 00:34:00.758631 | orchestrator | Sunday 05 April 2026 00:33:53 +0000 (0:00:00.272) 
0:05:28.432 ********** 2026-04-05 00:34:00.758648 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:00.758663 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:34:00.758677 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:00.758692 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:34:00.758706 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:34:00.758721 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:34:00.758737 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:34:00.758753 | orchestrator | 2026-04-05 00:34:00.758769 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-05 00:34:00.758784 | orchestrator | Sunday 05 April 2026 00:33:53 +0000 (0:00:00.410) 0:05:28.843 ********** 2026-04-05 00:34:00.758799 | orchestrator | ok: [testbed-manager] 2026-04-05 00:34:00.758815 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:34:00.758849 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:34:00.758867 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:34:00.758883 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:34:00.758897 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:34:00.758913 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:34:00.758928 | orchestrator | 2026-04-05 00:34:00.758974 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-05 00:34:00.758989 | orchestrator | Sunday 05 April 2026 00:33:54 +0000 (0:00:00.447) 0:05:29.291 ********** 2026-04-05 00:34:00.759004 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:00.759020 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:34:00.759036 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:00.759052 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:34:00.759068 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:34:00.759084 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
00:34:00.759100 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:34:00.759117 | orchestrator | 2026-04-05 00:34:00.759134 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-05 00:34:00.759152 | orchestrator | Sunday 05 April 2026 00:33:54 +0000 (0:00:00.287) 0:05:29.578 ********** 2026-04-05 00:34:00.759167 | orchestrator | ok: [testbed-manager] 2026-04-05 00:34:00.759183 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:34:00.759199 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:34:00.759214 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:34:00.759246 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:34:00.759261 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:34:00.759278 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:34:00.759294 | orchestrator | 2026-04-05 00:34:00.759310 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-05 00:34:00.759326 | orchestrator | Sunday 05 April 2026 00:33:54 +0000 (0:00:00.330) 0:05:29.909 ********** 2026-04-05 00:34:00.759343 | orchestrator | ok: [testbed-manager] =>  2026-04-05 00:34:00.759360 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:00.759376 | orchestrator | ok: [testbed-node-0] =>  2026-04-05 00:34:00.759391 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:00.759407 | orchestrator | ok: [testbed-node-1] =>  2026-04-05 00:34:00.759423 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:00.759438 | orchestrator | ok: [testbed-node-2] =>  2026-04-05 00:34:00.759454 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:00.759498 | orchestrator | ok: [testbed-node-3] =>  2026-04-05 00:34:00.759515 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:00.759530 | orchestrator | ok: [testbed-node-4] =>  2026-04-05 00:34:00.759546 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:00.759561 | orchestrator | ok: [testbed-node-5] =>  
2026-04-05 00:34:00.759576 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:34:00.759591 | orchestrator | 2026-04-05 00:34:00.759607 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-05 00:34:00.759623 | orchestrator | Sunday 05 April 2026 00:33:55 +0000 (0:00:00.305) 0:05:30.214 ********** 2026-04-05 00:34:00.759639 | orchestrator | ok: [testbed-manager] =>  2026-04-05 00:34:00.759655 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:00.759670 | orchestrator | ok: [testbed-node-0] =>  2026-04-05 00:34:00.759685 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:00.759701 | orchestrator | ok: [testbed-node-1] =>  2026-04-05 00:34:00.759717 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:00.759732 | orchestrator | ok: [testbed-node-2] =>  2026-04-05 00:34:00.759748 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:00.759764 | orchestrator | ok: [testbed-node-3] =>  2026-04-05 00:34:00.759779 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:00.759794 | orchestrator | ok: [testbed-node-4] =>  2026-04-05 00:34:00.759810 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:00.759825 | orchestrator | ok: [testbed-node-5] =>  2026-04-05 00:34:00.759841 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:34:00.759856 | orchestrator | 2026-04-05 00:34:00.759873 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-05 00:34:00.759889 | orchestrator | Sunday 05 April 2026 00:33:55 +0000 (0:00:00.285) 0:05:30.499 ********** 2026-04-05 00:34:00.759905 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:00.759921 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:34:00.759962 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:00.759980 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:34:00.759997 | orchestrator | skipping: [testbed-node-3] 
2026-04-05 00:34:00.760012 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:34:00.760028 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:34:00.760044 | orchestrator | 2026-04-05 00:34:00.760060 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-05 00:34:00.760076 | orchestrator | Sunday 05 April 2026 00:33:55 +0000 (0:00:00.306) 0:05:30.806 ********** 2026-04-05 00:34:00.760093 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:00.760109 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:34:00.760126 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:00.760136 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:34:00.760146 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:34:00.760155 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:34:00.760165 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:34:00.760174 | orchestrator | 2026-04-05 00:34:00.760184 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-05 00:34:00.760209 | orchestrator | Sunday 05 April 2026 00:33:55 +0000 (0:00:00.275) 0:05:31.081 ********** 2026-04-05 00:34:00.760221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:34:00.760233 | orchestrator | 2026-04-05 00:34:00.760243 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-05 00:34:00.760253 | orchestrator | Sunday 05 April 2026 00:33:56 +0000 (0:00:00.475) 0:05:31.557 ********** 2026-04-05 00:34:00.760262 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:34:00.760272 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:34:00.760282 | orchestrator | ok: [testbed-node-4] 2026-04-05 
00:34:00.760291 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:34:00.760301 | orchestrator | ok: [testbed-manager] 2026-04-05 00:34:00.760310 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:34:00.760320 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:34:00.760329 | orchestrator | 2026-04-05 00:34:00.760341 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-05 00:34:00.760357 | orchestrator | Sunday 05 April 2026 00:33:57 +0000 (0:00:00.851) 0:05:32.408 ********** 2026-04-05 00:34:00.760373 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:34:00.760389 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:34:00.760404 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:34:00.760419 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:34:00.760433 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:34:00.760449 | orchestrator | ok: [testbed-manager] 2026-04-05 00:34:00.760465 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:34:00.760481 | orchestrator | 2026-04-05 00:34:00.760497 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-05 00:34:00.760514 | orchestrator | Sunday 05 April 2026 00:34:00 +0000 (0:00:03.152) 0:05:35.561 ********** 2026-04-05 00:34:00.760531 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-05 00:34:00.760548 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-05 00:34:00.760565 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-05 00:34:00.760582 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:34:00.760598 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-05 00:34:00.760616 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-05 00:34:00.760632 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-05 00:34:00.760648 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:34:00.760665 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-04-05 00:34:00.760681 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-05 00:34:00.760697 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-05 00:34:00.760713 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:34:00.760730 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-05 00:34:00.760745 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-05 00:34:00.760761 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-05 00:34:00.760777 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:34:00.760811 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-05 00:35:02.979499 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-05 00:35:02.979600 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-05 00:35:02.979617 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:02.979672 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-05 00:35:02.979714 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-05 00:35:02.979725 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-05 00:35:02.979736 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:02.979766 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-05 00:35:02.979778 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-05 00:35:02.979789 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-05 00:35:02.979799 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:35:02.979811 | orchestrator | 2026-04-05 00:35:02.979823 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-05 00:35:02.979835 | orchestrator | Sunday 05 
April 2026 00:34:00 +0000 (0:00:00.640) 0:05:36.202 ********** 2026-04-05 00:35:02.979846 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:02.979857 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.979868 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.979879 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:02.979889 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.979900 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.979911 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:02.979921 | orchestrator | 2026-04-05 00:35:02.979932 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-05 00:35:02.979943 | orchestrator | Sunday 05 April 2026 00:34:07 +0000 (0:00:06.945) 0:05:43.148 ********** 2026-04-05 00:35:02.979954 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:02.979965 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:02.979975 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:02.979986 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.979996 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.980007 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.980018 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.980028 | orchestrator | 2026-04-05 00:35:02.980039 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-05 00:35:02.980116 | orchestrator | Sunday 05 April 2026 00:34:08 +0000 (0:00:01.058) 0:05:44.206 ********** 2026-04-05 00:35:02.980131 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:02.980145 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.980157 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.980170 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.980184 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.980197 | orchestrator | 
changed: [testbed-node-0] 2026-04-05 00:35:02.980210 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:02.980220 | orchestrator | 2026-04-05 00:35:02.980231 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-05 00:35:02.980242 | orchestrator | Sunday 05 April 2026 00:34:17 +0000 (0:00:08.509) 0:05:52.716 ********** 2026-04-05 00:35:02.980252 | orchestrator | changed: [testbed-manager] 2026-04-05 00:35:02.980263 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:02.980273 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.980284 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:02.980294 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.980305 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.980315 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.980326 | orchestrator | 2026-04-05 00:35:02.980336 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-05 00:35:02.980347 | orchestrator | Sunday 05 April 2026 00:34:21 +0000 (0:00:03.750) 0:05:56.467 ********** 2026-04-05 00:35:02.980358 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:02.980368 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:02.980379 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:02.980389 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.980400 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.980411 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.980426 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.980437 | orchestrator | 2026-04-05 00:35:02.980448 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-05 00:35:02.980459 | orchestrator | Sunday 05 April 2026 00:34:22 +0000 (0:00:01.328) 0:05:57.796 ********** 2026-04-05 00:35:02.980478 | orchestrator | ok: [testbed-manager] 
2026-04-05 00:35:02.980488 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:02.980499 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:02.980510 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.980520 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.980531 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.980541 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.980552 | orchestrator | 2026-04-05 00:35:02.980563 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-05 00:35:02.980573 | orchestrator | Sunday 05 April 2026 00:34:24 +0000 (0:00:01.446) 0:05:59.242 ********** 2026-04-05 00:35:02.980584 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:35:02.980594 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:35:02.980605 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:35:02.980616 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:35:02.980626 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:35:02.980637 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:35:02.980647 | orchestrator | changed: [testbed-manager] 2026-04-05 00:35:02.980658 | orchestrator | 2026-04-05 00:35:02.980669 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-05 00:35:02.980679 | orchestrator | Sunday 05 April 2026 00:34:24 +0000 (0:00:00.644) 0:05:59.887 ********** 2026-04-05 00:35:02.980690 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:02.980700 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.980711 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.980721 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:02.980732 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.980743 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.980753 | orchestrator | changed: [testbed-node-1] 2026-04-05 
00:35:02.980764 | orchestrator | 2026-04-05 00:35:02.980775 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-05 00:35:02.980802 | orchestrator | Sunday 05 April 2026 00:34:35 +0000 (0:00:10.539) 0:06:10.426 ********** 2026-04-05 00:35:02.980813 | orchestrator | changed: [testbed-manager] 2026-04-05 00:35:02.980824 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:02.980835 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:02.980845 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.980856 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.980867 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.980877 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.980888 | orchestrator | 2026-04-05 00:35:02.980899 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-05 00:35:02.980910 | orchestrator | Sunday 05 April 2026 00:34:36 +0000 (0:00:00.916) 0:06:11.343 ********** 2026-04-05 00:35:02.980920 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:02.980931 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.980941 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:35:02.980952 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:35:02.980963 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:35:02.980973 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:35:02.980984 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:35:02.980994 | orchestrator | 2026-04-05 00:35:02.981005 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-05 00:35:02.981016 | orchestrator | Sunday 05 April 2026 00:34:45 +0000 (0:00:09.382) 0:06:20.725 ********** 2026-04-05 00:35:02.981027 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:02.981037 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:35:02.981063 | 
orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:02.981074 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:02.981085 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:02.981095 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:02.981106 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:02.981117 | orchestrator |
2026-04-05 00:35:02.981133 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-05 00:35:02.981144 | orchestrator | Sunday 05 April 2026 00:34:56 +0000 (0:00:11.243) 0:06:31.969 **********
2026-04-05 00:35:02.981155 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-05 00:35:02.981166 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-05 00:35:02.981176 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-05 00:35:02.981187 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-05 00:35:02.981198 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-05 00:35:02.981209 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-05 00:35:02.981219 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-05 00:35:02.981230 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-05 00:35:02.981240 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-05 00:35:02.981251 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-05 00:35:02.981262 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-05 00:35:02.981273 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-05 00:35:02.981283 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-05 00:35:02.981294 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-05 00:35:02.981305 | orchestrator |
2026-04-05 00:35:02.981316 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-05 00:35:02.981327 | orchestrator | Sunday 05 April 2026 00:34:57 +0000 (0:00:01.223) 0:06:33.193 **********
2026-04-05 00:35:02.981338 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:02.981348 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:02.981359 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:02.981370 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:02.981381 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:02.981392 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:02.981402 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:02.981413 | orchestrator |
2026-04-05 00:35:02.981424 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-05 00:35:02.981435 | orchestrator | Sunday 05 April 2026 00:34:58 +0000 (0:00:00.710) 0:06:33.903 **********
2026-04-05 00:35:02.981446 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:02.981457 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:02.981468 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:02.981479 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:02.981489 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:02.981500 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:02.981511 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:02.981522 | orchestrator |
2026-04-05 00:35:02.981532 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-05 00:35:02.981544 | orchestrator | Sunday 05 April 2026 00:35:02 +0000 (0:00:03.615) 0:06:37.519 **********
2026-04-05 00:35:02.981555 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:02.981565 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:02.981576 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:02.981587 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:02.981598 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:02.981608 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:02.981619 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:02.981629 | orchestrator |
2026-04-05 00:35:02.981641 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-05 00:35:02.981662 | orchestrator | Sunday 05 April 2026 00:35:02 +0000 (0:00:00.435) 0:06:37.954 **********
2026-04-05 00:35:02.981680 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-05 00:35:02.981698 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-05 00:35:02.981717 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:02.981747 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-05 00:35:02.981761 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-05 00:35:02.981771 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-05 00:35:02.981782 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-05 00:35:02.981793 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:02.981811 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-05 00:35:20.913520 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-05 00:35:20.913629 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:20.913645 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-05 00:35:20.913656 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-05 00:35:20.913667 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:20.913678 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-05 00:35:20.913689 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-05 00:35:20.913700 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:20.913711 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:20.913722 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-05 00:35:20.913733 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-05 00:35:20.913743 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:20.913754 | orchestrator |
2026-04-05 00:35:20.913767 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-05 00:35:20.913787 | orchestrator | Sunday 05 April 2026 00:35:03 +0000 (0:00:00.503) 0:06:38.458 **********
2026-04-05 00:35:20.913806 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:20.913823 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:20.913842 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:20.913862 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:20.913883 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:20.913899 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:20.913910 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:20.913921 | orchestrator |
2026-04-05 00:35:20.913934 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-05 00:35:20.913952 | orchestrator | Sunday 05 April 2026 00:35:03 +0000 (0:00:00.452) 0:06:38.910 **********
2026-04-05 00:35:20.913970 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:20.913988 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:20.914007 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:20.914106 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:20.914121 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:20.914134 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:20.914146 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:20.914159 | orchestrator |
2026-04-05 00:35:20.914171 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-05 00:35:20.914184 | orchestrator | Sunday 05 April 2026 00:35:04 +0000 (0:00:00.581) 0:06:39.492 **********
2026-04-05 00:35:20.914197 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:20.914210 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:20.914223 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:20.914236 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:20.914249 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:20.914262 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:20.914276 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:20.914289 | orchestrator |
2026-04-05 00:35:20.914302 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-05 00:35:20.914315 | orchestrator | Sunday 05 April 2026 00:35:04 +0000 (0:00:00.505) 0:06:39.997 **********
2026-04-05 00:35:20.914328 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.914341 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:20.914385 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:20.914399 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:20.914412 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:20.914423 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:20.914434 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:20.914445 | orchestrator |
2026-04-05 00:35:20.914456 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-05 00:35:20.914467 | orchestrator | Sunday 05 April 2026 00:35:06 +0000 (0:00:01.554) 0:06:41.552 **********
2026-04-05 00:35:20.914491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:20.914505 | orchestrator |
2026-04-05 00:35:20.914516 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-05 00:35:20.914527 | orchestrator | Sunday 05 April 2026 00:35:07 +0000 (0:00:00.805) 0:06:42.358 **********
2026-04-05 00:35:20.914538 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.914549 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:20.914559 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:20.914570 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:20.914581 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:20.914591 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:20.914602 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:20.914613 | orchestrator |
2026-04-05 00:35:20.914624 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-05 00:35:20.914635 | orchestrator | Sunday 05 April 2026 00:35:08 +0000 (0:00:00.887) 0:06:43.246 **********
2026-04-05 00:35:20.914646 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.914656 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:20.914667 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:20.914678 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:20.914689 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:20.914699 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:20.914710 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:20.914721 | orchestrator |
2026-04-05 00:35:20.914732 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-05 00:35:20.914743 | orchestrator | Sunday 05 April 2026 00:35:08 +0000 (0:00:00.748) 0:06:43.995 **********
2026-04-05 00:35:20.914754 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.914764 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:20.914775 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:20.914786 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:20.914798 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:20.914808 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:20.914819 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:20.914830 | orchestrator |
2026-04-05 00:35:20.914841 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-05 00:35:20.914873 | orchestrator | Sunday 05 April 2026 00:35:09 +0000 (0:00:01.127) 0:06:45.123 **********
2026-04-05 00:35:20.914885 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:20.914896 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:20.914906 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:20.914917 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:20.914928 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:20.914939 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:20.914949 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:20.914960 | orchestrator |
2026-04-05 00:35:20.914971 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-05 00:35:20.914982 | orchestrator | Sunday 05 April 2026 00:35:11 +0000 (0:00:01.308) 0:06:46.431 **********
2026-04-05 00:35:20.914993 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.915003 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:20.915014 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:20.915033 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:20.915044 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:20.915055 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:20.915094 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:20.915113 | orchestrator |
2026-04-05 00:35:20.915130 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-05 00:35:20.915148 | orchestrator | Sunday 05 April 2026 00:35:12 +0000 (0:00:01.241) 0:06:47.672 **********
2026-04-05 00:35:20.915165 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:20.915183 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:20.915200 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:20.915216 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:20.915226 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:20.915237 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:20.915248 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:20.915258 | orchestrator |
2026-04-05 00:35:20.915269 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-05 00:35:20.915280 | orchestrator | Sunday 05 April 2026 00:35:13 +0000 (0:00:00.954) 0:06:48.926 **********
2026-04-05 00:35:20.915291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:20.915302 | orchestrator |
2026-04-05 00:35:20.915313 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-05 00:35:20.915324 | orchestrator | Sunday 05 April 2026 00:35:14 +0000 (0:00:00.954) 0:06:49.880 **********
2026-04-05 00:35:20.915335 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.915345 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:20.915356 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:20.915367 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:20.915377 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:20.915388 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:20.915399 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:20.915410 | orchestrator |
2026-04-05 00:35:20.915420 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-05 00:35:20.915431 | orchestrator | Sunday 05 April 2026 00:35:16 +0000 (0:00:01.367) 0:06:51.247 **********
2026-04-05 00:35:20.915442 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.915453 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:20.915464 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:20.915474 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:20.915485 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:20.915495 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:20.915506 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:20.915516 | orchestrator |
2026-04-05 00:35:20.915527 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-05 00:35:20.915538 | orchestrator | Sunday 05 April 2026 00:35:17 +0000 (0:00:01.343) 0:06:52.591 **********
2026-04-05 00:35:20.915549 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.915559 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:20.915570 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:20.915580 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:20.915591 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:20.915608 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:20.915618 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:20.915630 | orchestrator |
2026-04-05 00:35:20.915650 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-05 00:35:20.915669 | orchestrator | Sunday 05 April 2026 00:35:18 +0000 (0:00:01.125) 0:06:53.716 **********
2026-04-05 00:35:20.915685 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:20.915703 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:20.915719 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:20.915736 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:20.915752 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:20.915783 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:20.915803 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:20.915820 | orchestrator |
2026-04-05 00:35:20.915837 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-05 00:35:20.915848 | orchestrator | Sunday 05 April 2026 00:35:19 +0000 (0:00:01.157) 0:06:54.874 **********
2026-04-05 00:35:20.915859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:20.915870 | orchestrator |
2026-04-05 00:35:20.915881 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:20.915892 | orchestrator | Sunday 05 April 2026 00:35:20 +0000 (0:00:00.946) 0:06:55.820 **********
2026-04-05 00:35:20.915903 | orchestrator |
2026-04-05 00:35:20.915914 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:20.915924 | orchestrator | Sunday 05 April 2026 00:35:20 +0000 (0:00:00.211) 0:06:56.032 **********
2026-04-05 00:35:20.915935 | orchestrator |
2026-04-05 00:35:20.915946 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:20.915957 | orchestrator | Sunday 05 April 2026 00:35:20 +0000 (0:00:00.040) 0:06:56.073 **********
2026-04-05 00:35:20.915967 | orchestrator |
2026-04-05 00:35:20.915978 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:20.915999 | orchestrator | Sunday 05 April 2026 00:35:20 +0000 (0:00:00.042) 0:06:56.116 **********
2026-04-05 00:35:47.758530 | orchestrator |
2026-04-05 00:35:47.758630 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:47.758644 | orchestrator | Sunday 05 April 2026 00:35:20 +0000 (0:00:00.049) 0:06:56.165 **********
2026-04-05 00:35:47.758652 | orchestrator |
2026-04-05 00:35:47.758660 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:47.758668 | orchestrator | Sunday 05 April 2026 00:35:21 +0000 (0:00:00.056) 0:06:56.222 **********
2026-04-05 00:35:47.758675 | orchestrator |
2026-04-05 00:35:47.758682 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:35:47.758690 | orchestrator | Sunday 05 April 2026 00:35:21 +0000 (0:00:00.040) 0:06:56.263 **********
2026-04-05 00:35:47.758697 | orchestrator |
2026-04-05 00:35:47.758704 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-05 00:35:47.758711 | orchestrator | Sunday 05 April 2026 00:35:21 +0000 (0:00:00.046) 0:06:56.310 **********
2026-04-05 00:35:47.758718 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:47.758727 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:47.758734 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:47.758741 | orchestrator |
2026-04-05 00:35:47.758749 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-05 00:35:47.758756 | orchestrator | Sunday 05 April 2026 00:35:22 +0000 (0:00:01.188) 0:06:57.498 **********
2026-04-05 00:35:47.758763 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:47.758771 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:47.758778 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:47.758785 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:47.758792 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:47.758799 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:47.758806 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:47.758813 | orchestrator |
2026-04-05 00:35:47.758820 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-05 00:35:47.758828 | orchestrator | Sunday 05 April 2026 00:35:23 +0000 (0:00:01.437) 0:06:58.936 **********
2026-04-05 00:35:47.758835 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:47.758842 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:47.758850 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:47.758857 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:47.758864 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:47.758899 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:47.758912 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:47.758925 | orchestrator |
2026-04-05 00:35:47.758938 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-05 00:35:47.758950 | orchestrator | Sunday 05 April 2026 00:35:24 +0000 (0:00:01.219) 0:07:00.155 **********
2026-04-05 00:35:47.758963 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:47.758975 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:47.758987 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:47.758999 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:47.759007 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:47.759015 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:47.759022 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:47.759035 | orchestrator |
2026-04-05 00:35:47.759047 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-05 00:35:47.759059 | orchestrator | Sunday 05 April 2026 00:35:27 +0000 (0:00:02.330) 0:07:02.485 **********
2026-04-05 00:35:47.759123 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:47.759137 | orchestrator |
2026-04-05 00:35:47.759152 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-05 00:35:47.759170 | orchestrator | Sunday 05 April 2026 00:35:27 +0000 (0:00:00.108) 0:07:02.594 **********
2026-04-05 00:35:47.759183 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:47.759197 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:47.759212 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:47.759225 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:47.759234 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:47.759243 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:47.759251 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:47.759260 | orchestrator |
2026-04-05 00:35:47.759269 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-05 00:35:47.759278 | orchestrator | Sunday 05 April 2026 00:35:28 +0000 (0:00:01.232) 0:07:03.827 **********
2026-04-05 00:35:47.759287 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:47.759296 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:47.759304 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:47.759312 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:47.759321 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:47.759330 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:47.759343 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:47.759357 | orchestrator |
2026-04-05 00:35:47.759369 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-05 00:35:47.759382 | orchestrator | Sunday 05 April 2026 00:35:29 +0000 (0:00:00.538) 0:07:04.366 **********
2026-04-05 00:35:47.759393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:47.759404 | orchestrator |
2026-04-05 00:35:47.759414 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-05 00:35:47.759423 | orchestrator | Sunday 05 April 2026 00:35:30 +0000 (0:00:00.948) 0:07:05.315 **********
2026-04-05 00:35:47.759431 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:47.759440 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:47.759448 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:47.759456 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:47.759463 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:47.759470 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:47.759477 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:47.759484 | orchestrator |
2026-04-05 00:35:47.759491 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-05 00:35:47.759498 | orchestrator | Sunday 05 April 2026 00:35:31 +0000 (0:00:01.095) 0:07:06.410 **********
2026-04-05 00:35:47.759506 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-05 00:35:47.759539 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-05 00:35:47.759547 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-05 00:35:47.759555 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-05 00:35:47.759562 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-05 00:35:47.759569 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-05 00:35:47.759592 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-05 00:35:47.759604 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-05 00:35:47.759616 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-05 00:35:47.759628 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-05 00:35:47.759639 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-05 00:35:47.759652 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-05 00:35:47.759664 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-05 00:35:47.759677 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-05 00:35:47.759688 | orchestrator |
2026-04-05 00:35:47.759701 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-05 00:35:47.759713 | orchestrator | Sunday 05 April 2026 00:35:33 +0000 (0:00:02.533) 0:07:08.944 **********
2026-04-05 00:35:47.759726 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:47.759739 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:47.759751 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:47.759763 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:47.759841 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:47.759857 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:47.759868 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:47.759880 | orchestrator |
2026-04-05 00:35:47.759894 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-05 00:35:47.759907 | orchestrator | Sunday 05 April 2026 00:35:34 +0000 (0:00:00.533) 0:07:09.478 **********
2026-04-05 00:35:47.759921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:47.759936 | orchestrator |
2026-04-05 00:35:47.759947 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-05 00:35:47.759961 | orchestrator | Sunday 05 April 2026 00:35:35 +0000 (0:00:01.088) 0:07:10.566 **********
2026-04-05 00:35:47.759974 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:47.759987 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:47.760001 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:47.760014 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:47.760026 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:47.760038 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:47.760051 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:47.760064 | orchestrator |
2026-04-05 00:35:47.760098 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-05 00:35:47.760111 | orchestrator | Sunday 05 April 2026 00:35:36 +0000 (0:00:00.836) 0:07:11.403 **********
2026-04-05 00:35:47.760123 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:47.760135 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:47.760146 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:47.760153 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:47.760160 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:47.760167 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:47.760175 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:47.760183 | orchestrator |
2026-04-05 00:35:47.760196 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-05 00:35:47.760209 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.900) 0:07:12.303 **********
2026-04-05 00:35:47.760233 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:47.760240 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:47.760248 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:47.760255 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:47.760262 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:47.760269 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:47.760278 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:47.760290 | orchestrator |
2026-04-05 00:35:47.760301 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-05 00:35:47.760312 | orchestrator | Sunday 05 April 2026 00:35:37 +0000 (0:00:00.552) 0:07:12.855 **********
2026-04-05 00:35:47.760324 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:47.760335 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:47.760348 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:47.760359 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:47.760371 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:47.760382 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:47.760395 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:47.760408 | orchestrator |
2026-04-05 00:35:47.760419 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-05 00:35:47.760430 | orchestrator | Sunday 05 April 2026 00:35:39 +0000 (0:00:01.528) 0:07:14.384 **********
2026-04-05 00:35:47.760440 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:47.760452 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:47.760465 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:47.760476 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:47.760488 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:47.760499 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:47.760511 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:47.760523 | orchestrator |
2026-04-05 00:35:47.760535 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-05 00:35:47.760548 | orchestrator | Sunday 05 April 2026 00:35:39 +0000 (0:00:00.808) 0:07:15.193 **********
2026-04-05 00:35:47.760561 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:47.760573 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:47.760585 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:47.760597 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:47.760609 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:47.760621 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:47.760647 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:21.588444 | orchestrator |
2026-04-05 00:36:21.588590 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-05 00:36:21.588608 | orchestrator | Sunday 05 April 2026 00:35:47 +0000 (0:00:07.857) 0:07:23.050 **********
2026-04-05 00:36:21.588620 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:21.588632 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:21.588645 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:21.588656 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:21.588667 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:21.588678 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:21.588689 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:21.588700 | orchestrator |
2026-04-05 00:36:21.588712 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-05 00:36:21.588723 | orchestrator | Sunday 05 April 2026 00:35:49 +0000 (0:00:01.805) 0:07:24.415 **********
2026-04-05 00:36:21.588734 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:21.588745 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:21.588756 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:21.588767 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:21.588777 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:21.588788 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:21.588799 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:21.588810 | orchestrator |
2026-04-05 00:36:21.588821 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-05 00:36:21.588862 | orchestrator | Sunday 05 April 2026 00:35:51 +0000 (0:00:01.805) 0:07:26.221 **********
2026-04-05 00:36:21.588873 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:21.588884 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:21.588895 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:21.588906 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:21.588916 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:21.588927 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:21.588941 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:21.588953 | orchestrator |
2026-04-05 00:36:21.588967 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-05 00:36:21.588980 | orchestrator | Sunday 05 April 2026 00:35:53 +0000 (0:00:02.045) 0:07:28.266 **********
2026-04-05 00:36:21.588993 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:21.589005 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:21.589019 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:21.589033 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:21.589046 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:21.589058 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:21.589070 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:21.589105 | orchestrator |
2026-04-05 00:36:21.589118 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-05 00:36:21.589131 | orchestrator | Sunday 05 April 2026 00:35:53 +0000 (0:00:00.859) 0:07:29.126 **********
2026-04-05 00:36:21.589144 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:21.589155 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:21.589166 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:21.589177 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:21.589187 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:21.589198 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:21.589209 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:21.589220 | orchestrator |
2026-04-05 00:36:21.589231 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-05 00:36:21.589241 | orchestrator | Sunday 05 April 2026 00:35:54 +0000 (0:00:00.919) 0:07:30.045 **********
2026-04-05 00:36:21.589252 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:21.589263 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:21.589274 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:21.589284 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:21.589295 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:21.589306 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:21.589316 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:21.589327 | orchestrator |
2026-04-05 00:36:21.589338 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-05 00:36:21.589366 | orchestrator | Sunday 05 April 2026 00:35:55 +0000 (0:00:00.693) 0:07:30.739 **********
2026-04-05 00:36:21.589378 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:21.589389 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:21.589400 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:21.589410 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:21.589421 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:21.589432 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:21.589443 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:21.589454 | orchestrator |
2026-04-05 00:36:21.589465 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-05 00:36:21.589476 | orchestrator | Sunday 05 April 2026 00:35:56 +0000 (0:00:00.532) 0:07:31.272 **********
2026-04-05 00:36:21.589487 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:21.589497 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:21.589508 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:21.589519 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:21.589529 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:21.589540 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:21.589550 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:21.589570 | orchestrator |
2026-04-05 00:36:21.589581 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-05 00:36:21.589592 | orchestrator | Sunday 05 April 2026 00:35:56 +0000 (0:00:00.557) 0:07:31.829 **********
2026-04-05 00:36:21.589603 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:21.589614 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:21.589624 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:21.589635 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:21.589645 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:21.589656 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:21.589667 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:21.589678 | orchestrator |
2026-04-05 00:36:21.589688 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-05 00:36:21.589699 | orchestrator | Sunday 05 April 2026 00:35:57 +0000 (0:00:00.525) 0:07:32.355 **********
2026-04-05 00:36:21.589710 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:21.589721 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:21.589731 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:21.589742 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:21.589753 |
orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:21.589763 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:21.589774 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:21.589784 | orchestrator | 2026-04-05 00:36:21.589815 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-04-05 00:36:21.589827 | orchestrator | Sunday 05 April 2026 00:36:02 +0000 (0:00:05.623) 0:07:37.979 ********** 2026-04-05 00:36:21.589837 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:36:21.589848 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:36:21.589859 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:36:21.589870 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:36:21.589880 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:36:21.589891 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:36:21.589902 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:36:21.589912 | orchestrator | 2026-04-05 00:36:21.589923 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-05 00:36:21.589934 | orchestrator | Sunday 05 April 2026 00:36:03 +0000 (0:00:00.805) 0:07:38.785 ********** 2026-04-05 00:36:21.589948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:36:21.589962 | orchestrator | 2026-04-05 00:36:21.589973 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-05 00:36:21.589983 | orchestrator | Sunday 05 April 2026 00:36:04 +0000 (0:00:00.870) 0:07:39.655 ********** 2026-04-05 00:36:21.589994 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:21.590005 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:21.590119 | orchestrator | ok: [testbed-node-2] 
2026-04-05 00:36:21.590135 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:21.590146 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:21.590157 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:21.590167 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:21.590178 | orchestrator | 2026-04-05 00:36:21.590189 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-05 00:36:21.590200 | orchestrator | Sunday 05 April 2026 00:36:06 +0000 (0:00:02.228) 0:07:41.884 ********** 2026-04-05 00:36:21.590210 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:21.590221 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:21.590231 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:21.590242 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:21.590252 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:21.590263 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:21.590273 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:21.590284 | orchestrator | 2026-04-05 00:36:21.590294 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-05 00:36:21.590314 | orchestrator | Sunday 05 April 2026 00:36:08 +0000 (0:00:01.352) 0:07:43.237 ********** 2026-04-05 00:36:21.590325 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:21.590335 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:21.590345 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:21.590356 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:21.590366 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:21.590377 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:21.590388 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:21.590398 | orchestrator | 2026-04-05 00:36:21.590409 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-05 00:36:21.590420 | orchestrator | Sunday 05 April 2026 00:36:08 +0000 
(0:00:00.884) 0:07:44.121 ********** 2026-04-05 00:36:21.590431 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-05 00:36:21.590445 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-05 00:36:21.590456 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-05 00:36:21.590473 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-05 00:36:21.590484 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-05 00:36:21.590494 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-05 00:36:21.590505 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-05 00:36:21.590516 | orchestrator | 2026-04-05 00:36:21.590527 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-04-05 00:36:21.590537 | orchestrator | Sunday 05 April 2026 00:36:10 +0000 (0:00:01.772) 0:07:45.894 ********** 2026-04-05 00:36:21.590549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:36:21.590560 | orchestrator | 2026-04-05 00:36:21.590570 | orchestrator | TASK 
[osism.services.lldpd : Install lldpd package] **************************** 2026-04-05 00:36:21.590581 | orchestrator | Sunday 05 April 2026 00:36:11 +0000 (0:00:01.078) 0:07:46.973 ********** 2026-04-05 00:36:21.590592 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:21.590603 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:21.590613 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:21.590624 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:21.590635 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:21.590645 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:21.590656 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:21.590667 | orchestrator | 2026-04-05 00:36:21.590686 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-05 00:36:53.150395 | orchestrator | Sunday 05 April 2026 00:36:21 +0000 (0:00:09.816) 0:07:56.790 ********** 2026-04-05 00:36:53.150489 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:53.150499 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:53.150506 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:53.150512 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:53.150519 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:53.150527 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:53.150534 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:53.150541 | orchestrator | 2026-04-05 00:36:53.150550 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-05 00:36:53.150578 | orchestrator | Sunday 05 April 2026 00:36:23 +0000 (0:00:01.804) 0:07:58.595 ********** 2026-04-05 00:36:53.150586 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:53.150593 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:53.150600 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:53.150607 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:53.150614 
| orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:53.150621 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:53.150629 | orchestrator | 2026-04-05 00:36:53.150636 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-05 00:36:53.150643 | orchestrator | Sunday 05 April 2026 00:36:24 +0000 (0:00:01.508) 0:08:00.103 ********** 2026-04-05 00:36:53.150651 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.150659 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.150666 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.150673 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:53.150680 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.150687 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:53.150694 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.150701 | orchestrator | 2026-04-05 00:36:53.150709 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-05 00:36:53.150716 | orchestrator | 2026-04-05 00:36:53.150723 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-05 00:36:53.150730 | orchestrator | Sunday 05 April 2026 00:36:26 +0000 (0:00:01.266) 0:08:01.370 ********** 2026-04-05 00:36:53.150737 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:36:53.150745 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:36:53.150752 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:36:53.150760 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:36:53.150767 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:36:53.150774 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:36:53.150781 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:36:53.150788 | orchestrator | 2026-04-05 00:36:53.150795 | orchestrator | PLAY [Apply bootstrap roles part 3] 
******************************************** 2026-04-05 00:36:53.150802 | orchestrator | 2026-04-05 00:36:53.150810 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-04-05 00:36:53.150817 | orchestrator | Sunday 05 April 2026 00:36:26 +0000 (0:00:00.603) 0:08:01.974 ********** 2026-04-05 00:36:53.150824 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.150831 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.150838 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:53.150846 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.150853 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.150860 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:53.150867 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.150874 | orchestrator | 2026-04-05 00:36:53.150881 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-05 00:36:53.150888 | orchestrator | Sunday 05 April 2026 00:36:28 +0000 (0:00:01.366) 0:08:03.340 ********** 2026-04-05 00:36:53.150896 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:53.150903 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:53.150910 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:53.150917 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:53.150924 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:53.150931 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:53.150938 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:53.150945 | orchestrator | 2026-04-05 00:36:53.150954 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-05 00:36:53.150973 | orchestrator | Sunday 05 April 2026 00:36:29 +0000 (0:00:01.720) 0:08:05.061 ********** 2026-04-05 00:36:53.150982 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:36:53.150991 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 00:36:53.151000 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:36:53.151008 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:36:53.151022 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:36:53.151030 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:36:53.151039 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:36:53.151047 | orchestrator | 2026-04-05 00:36:53.151055 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-05 00:36:53.151063 | orchestrator | Sunday 05 April 2026 00:36:30 +0000 (0:00:00.492) 0:08:05.553 ********** 2026-04-05 00:36:53.151071 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:36:53.151081 | orchestrator | 2026-04-05 00:36:53.151090 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-05 00:36:53.151098 | orchestrator | Sunday 05 April 2026 00:36:31 +0000 (0:00:00.844) 0:08:06.397 ********** 2026-04-05 00:36:53.151109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:36:53.151144 | orchestrator | 2026-04-05 00:36:53.151151 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-05 00:36:53.151158 | orchestrator | Sunday 05 April 2026 00:36:32 +0000 (0:00:00.990) 0:08:07.388 ********** 2026-04-05 00:36:53.151166 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:53.151175 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.151183 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.151191 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.151199 | 
orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:53.151208 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.151216 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.151224 | orchestrator | 2026-04-05 00:36:53.151248 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-05 00:36:53.151257 | orchestrator | Sunday 05 April 2026 00:36:41 +0000 (0:00:09.211) 0:08:16.599 ********** 2026-04-05 00:36:53.151266 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.151274 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.151282 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:53.151290 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.151298 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.151307 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:53.151314 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.151323 | orchestrator | 2026-04-05 00:36:53.151331 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-05 00:36:53.151338 | orchestrator | Sunday 05 April 2026 00:36:42 +0000 (0:00:00.836) 0:08:17.436 ********** 2026-04-05 00:36:53.151345 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.151352 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.151359 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:53.151366 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.151373 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.151380 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:53.151387 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.151394 | orchestrator | 2026-04-05 00:36:53.151401 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-05 00:36:53.151409 | orchestrator | Sunday 05 April 2026 00:36:43 +0000 (0:00:01.424) 
0:08:18.860 ********** 2026-04-05 00:36:53.151416 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.151423 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.151430 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.151437 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:53.151444 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.151451 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:53.151458 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.151465 | orchestrator | 2026-04-05 00:36:53.151472 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-04-05 00:36:53.151485 | orchestrator | Sunday 05 April 2026 00:36:45 +0000 (0:00:02.036) 0:08:20.897 ********** 2026-04-05 00:36:53.151492 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.151497 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.151503 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.151510 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:53.151517 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:53.151524 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.151531 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.151539 | orchestrator | 2026-04-05 00:36:53.151546 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-05 00:36:53.151553 | orchestrator | Sunday 05 April 2026 00:36:47 +0000 (0:00:01.347) 0:08:22.245 ********** 2026-04-05 00:36:53.151560 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.151567 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.151574 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.151581 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:53.151588 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.151596 | orchestrator | changed: [testbed-node-4] 
2026-04-05 00:36:53.151602 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.151610 | orchestrator | 2026-04-05 00:36:53.151617 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-05 00:36:53.151624 | orchestrator | 2026-04-05 00:36:53.151631 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-05 00:36:53.151638 | orchestrator | Sunday 05 April 2026 00:36:48 +0000 (0:00:01.152) 0:08:23.398 ********** 2026-04-05 00:36:53.151646 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:36:53.151653 | orchestrator | 2026-04-05 00:36:53.151660 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-05 00:36:53.151671 | orchestrator | Sunday 05 April 2026 00:36:49 +0000 (0:00:01.013) 0:08:24.411 ********** 2026-04-05 00:36:53.151679 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:53.151686 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:53.151693 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:53.151700 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:53.151707 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:53.151714 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:53.151722 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:53.151729 | orchestrator | 2026-04-05 00:36:53.151736 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-05 00:36:53.151743 | orchestrator | Sunday 05 April 2026 00:36:50 +0000 (0:00:00.847) 0:08:25.259 ********** 2026-04-05 00:36:53.151750 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:53.151757 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:53.151764 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:53.151771 | orchestrator | 
changed: [testbed-node-2] 2026-04-05 00:36:53.151778 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:53.151785 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:53.151793 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:53.151799 | orchestrator | 2026-04-05 00:36:53.151806 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-05 00:36:53.151814 | orchestrator | Sunday 05 April 2026 00:36:51 +0000 (0:00:01.318) 0:08:26.577 ********** 2026-04-05 00:36:53.151821 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:36:53.151828 | orchestrator | 2026-04-05 00:36:53.151835 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-05 00:36:53.151842 | orchestrator | Sunday 05 April 2026 00:36:52 +0000 (0:00:00.893) 0:08:27.470 ********** 2026-04-05 00:36:53.151849 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:53.151856 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:53.151868 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:53.151875 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:53.151882 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:53.151889 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:53.151896 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:53.151903 | orchestrator | 2026-04-05 00:36:53.151914 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-05 00:36:54.782849 | orchestrator | Sunday 05 April 2026 00:36:53 +0000 (0:00:00.884) 0:08:28.354 ********** 2026-04-05 00:36:54.782955 | orchestrator | changed: [testbed-manager] 2026-04-05 00:36:54.782971 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:54.782983 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:54.782994 | orchestrator | 
changed: [testbed-node-2] 2026-04-05 00:36:54.783005 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:54.783016 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:54.783027 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:54.783038 | orchestrator | 2026-04-05 00:36:54.783050 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:36:54.783062 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-05 00:36:54.783074 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-05 00:36:54.783085 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-05 00:36:54.783096 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-05 00:36:54.783106 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-05 00:36:54.783149 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-05 00:36:54.783162 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-05 00:36:54.783173 | orchestrator | 2026-04-05 00:36:54.783184 | orchestrator | 2026-04-05 00:36:54.783194 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:36:54.783205 | orchestrator | Sunday 05 April 2026 00:36:54 +0000 (0:00:01.267) 0:08:29.621 ********** 2026-04-05 00:36:54.783216 | orchestrator | =============================================================================== 2026-04-05 00:36:54.783227 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.37s 2026-04-05 00:36:54.783238 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 36.62s 2026-04-05 00:36:54.783249 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.48s 2026-04-05 00:36:54.783259 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.69s 2026-04-05 00:36:54.783270 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.67s 2026-04-05 00:36:54.783280 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.11s 2026-04-05 00:36:54.783292 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.24s 2026-04-05 00:36:54.783302 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.54s 2026-04-05 00:36:54.783313 | orchestrator | osism.services.rng : Install rng package ------------------------------- 10.13s 2026-04-05 00:36:54.783324 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.82s 2026-04-05 00:36:54.783335 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.38s 2026-04-05 00:36:54.783374 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.21s 2026-04-05 00:36:54.783387 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.64s 2026-04-05 00:36:54.783399 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.51s 2026-04-05 00:36:54.783412 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.26s 2026-04-05 00:36:54.783425 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.90s 2026-04-05 00:36:54.783457 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.86s 2026-04-05 00:36:54.783469 | orchestrator | 
osism.services.docker : Install apt-transport-https package ------------- 6.95s
2026-04-05 00:36:54.783483 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.64s
2026-04-05 00:36:54.783495 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.91s
2026-04-05 00:36:54.988226 | orchestrator | + osism apply fail2ban
2026-04-05 00:37:06.847175 | orchestrator | 2026-04-05 00:37:06 | INFO  | Prepare task for execution of fail2ban.
2026-04-05 00:37:06.948873 | orchestrator | 2026-04-05 00:37:06 | INFO  | Task bd6906ac-fe65-42c3-acbb-f0c9a1f3bf68 (fail2ban) was prepared for execution.
2026-04-05 00:37:06.948988 | orchestrator | 2026-04-05 00:37:06 | INFO  | It takes a moment until task bd6906ac-fe65-42c3-acbb-f0c9a1f3bf68 (fail2ban) has been started and output is visible here.
2026-04-05 00:37:28.484993 | orchestrator |
2026-04-05 00:37:28.485071 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-05 00:37:28.485079 | orchestrator |
2026-04-05 00:37:28.485085 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-05 00:37:28.485091 | orchestrator | Sunday 05 April 2026 00:37:10 +0000 (0:00:00.362) 0:00:00.362 **********
2026-04-05 00:37:28.485097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:37:28.485105 | orchestrator |
2026-04-05 00:37:28.485109 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-05 00:37:28.485115 | orchestrator | Sunday 05 April 2026 00:37:11 +0000 (0:00:01.228) 0:00:01.591 **********
2026-04-05 00:37:28.485120 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:37:28.485126 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:37:28.485131 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:37:28.485135 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:37:28.485140 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:37:28.485144 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:37:28.485149 | orchestrator | changed: [testbed-manager]
2026-04-05 00:37:28.485188 | orchestrator |
2026-04-05 00:37:28.485194 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-05 00:37:28.485199 | orchestrator | Sunday 05 April 2026 00:37:23 +0000 (0:00:11.416) 0:00:13.007 **********
2026-04-05 00:37:28.485203 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:37:28.485208 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:37:28.485212 | orchestrator | changed: [testbed-manager]
2026-04-05 00:37:28.485217 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:37:28.485221 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:37:28.485226 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:37:28.485231 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:37:28.485235 | orchestrator |
2026-04-05 00:37:28.485240 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-05 00:37:28.485244 | orchestrator | Sunday 05 April 2026 00:37:25 +0000 (0:00:01.781) 0:00:14.788 **********
2026-04-05 00:37:28.485249 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:37:28.485254 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:37:28.485259 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:37:28.485283 | orchestrator | ok: [testbed-manager]
2026-04-05 00:37:28.485288 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:37:28.485292 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:37:28.485297 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:37:28.485301 | orchestrator |
2026-04-05 00:37:28.485306 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-05 00:37:28.485311 | orchestrator | Sunday 05 April 2026 00:37:26 +0000 (0:00:01.270) 0:00:16.059 **********
2026-04-05 00:37:28.485316 | orchestrator | changed: [testbed-manager]
2026-04-05 00:37:28.485320 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:37:28.485325 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:37:28.485329 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:37:28.485334 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:37:28.485338 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:37:28.485343 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:37:28.485347 | orchestrator |
2026-04-05 00:37:28.485352 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:37:28.485357 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:37:28.485362 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:37:28.485367 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:37:28.485372 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:37:28.485386 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:37:28.485390 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:37:28.485395 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:37:28.485399 | orchestrator |
2026-04-05 00:37:28.485404 | orchestrator |
2026-04-05 00:37:28.485409 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:37:28.485413 | orchestrator | Sunday 05 April 2026 00:37:28 +0000 (0:00:01.694) 0:00:17.753 **********
2026-04-05 00:37:28.485418 | orchestrator | ===============================================================================
2026-04-05 00:37:28.485423 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.42s
2026-04-05 00:37:28.485427 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.78s
2026-04-05 00:37:28.485432 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.69s
2026-04-05 00:37:28.485436 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.27s
2026-04-05 00:37:28.485441 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.23s
2026-04-05 00:37:28.687836 | orchestrator | + osism apply network
2026-04-05 00:37:40.028987 | orchestrator | 2026-04-05 00:37:40 | INFO  | Prepare task for execution of network.
2026-04-05 00:37:40.111365 | orchestrator | 2026-04-05 00:37:40 | INFO  | Task 8386ed83-9c09-4781-9166-a0341f3db841 (network) was prepared for execution.
2026-04-05 00:37:40.112016 | orchestrator | 2026-04-05 00:37:40 | INFO  | It takes a moment until task 8386ed83-9c09-4781-9166-a0341f3db841 (network) has been started and output is visible here.
2026-04-05 00:38:08.979345 | orchestrator |
2026-04-05 00:38:08.979453 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-05 00:38:08.979464 | orchestrator |
2026-04-05 00:38:08.979472 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-05 00:38:08.979501 | orchestrator | Sunday 05 April 2026 00:37:43 +0000 (0:00:00.348) 0:00:00.348 **********
2026-04-05 00:38:08.979509 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:08.979517 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:08.979522 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:08.979529 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:08.979535 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:08.979542 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:08.979548 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:08.979555 | orchestrator |
2026-04-05 00:38:08.979561 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-05 00:38:08.979567 | orchestrator | Sunday 05 April 2026 00:37:44 +0000 (0:00:00.659) 0:00:01.008 **********
2026-04-05 00:38:08.979576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:38:08.979586 | orchestrator |
2026-04-05 00:38:08.979593 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-05 00:38:08.979599 | orchestrator | Sunday 05 April 2026 00:37:45 +0000 (0:00:01.188) 0:00:02.197 **********
2026-04-05 00:38:08.979606 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:08.979612 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:08.979619 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:08.979625 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:08.979632 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:08.979639 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:08.979646 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:08.979653 | orchestrator |
2026-04-05 00:38:08.979658 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-05 00:38:08.979665 | orchestrator | Sunday 05 April 2026 00:37:47 +0000 (0:00:02.498) 0:00:04.695 **********
2026-04-05 00:38:08.979671 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:08.979677 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:08.979683 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:08.979690 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:08.979696 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:08.979702 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:08.979708 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:08.979715 | orchestrator |
2026-04-05 00:38:08.979721 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-05 00:38:08.979727 | orchestrator | Sunday 05 April 2026 00:37:49 +0000 (0:00:01.629) 0:00:06.325 **********
2026-04-05 00:38:08.979734 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-05 00:38:08.979741 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-05 00:38:08.979748 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-05 00:38:08.979754 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-05 00:38:08.979760 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-05 00:38:08.979766 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-05 00:38:08.979772 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-05 00:38:08.979779 | orchestrator |
2026-04-05 00:38:08.979786 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-05 00:38:08.979792 | orchestrator | Sunday 05 April 2026 00:37:50 +0000 (0:00:01.198) 0:00:07.523 **********
2026-04-05 00:38:08.979798 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:38:08.979805 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 00:38:08.979811 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 00:38:08.979818 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 00:38:08.979838 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 00:38:08.979846 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 00:38:08.979852 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 00:38:08.979859 | orchestrator |
2026-04-05 00:38:08.979874 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-05 00:38:08.979881 | orchestrator | Sunday 05 April 2026 00:37:54 +0000 (0:00:03.490) 0:00:11.014 **********
2026-04-05 00:38:08.979889 | orchestrator | changed: [testbed-manager]
2026-04-05 00:38:08.979896 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:38:08.979904 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:38:08.979910 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:38:08.979918 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:38:08.979924 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:38:08.979930 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:38:08.979935 | orchestrator |
2026-04-05 00:38:08.979941 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-05 00:38:08.979948 | orchestrator | Sunday 05 April 2026 00:37:55 +0000 (0:00:01.682) 0:00:12.697 **********
2026-04-05 00:38:08.979956 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:38:08.979963 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 00:38:08.979970 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 00:38:08.979978 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 00:38:08.979985 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 00:38:08.979993 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 00:38:08.980001 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 00:38:08.980008 | orchestrator |
2026-04-05 00:38:08.980016 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-05 00:38:08.980023 | orchestrator | Sunday 05 April 2026 00:37:58 +0000 (0:00:02.106) 0:00:14.803 **********
2026-04-05 00:38:08.980031 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:08.980038 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:08.980046 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:08.980053 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:08.980061 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:08.980068 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:08.980075 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:08.980082 | orchestrator |
2026-04-05 00:38:08.980088 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-05 00:38:08.980114 | orchestrator | Sunday 05 April 2026 00:37:59 +0000 (0:00:00.978) 0:00:15.781 **********
2026-04-05 00:38:08.980122 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:38:08.980128 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:38:08.980136 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:38:08.980143 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:38:08.980150 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:38:08.980157 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:38:08.980164 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:38:08.980171 | orchestrator |
2026-04-05 00:38:08.980177 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-05 00:38:08.980185 | orchestrator | Sunday 05 April 2026 00:37:59 +0000 (0:00:00.819) 0:00:16.601 **********
2026-04-05 00:38:08.980192 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:08.980246 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:08.980252 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:08.980258 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:08.980264 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:08.980270 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:08.980276 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:08.980282 | orchestrator |
2026-04-05 00:38:08.980289 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-05 00:38:08.980295 | orchestrator | Sunday 05 April 2026 00:38:01 +0000 (0:00:02.102) 0:00:18.703 **********
2026-04-05 00:38:08.980301 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:38:08.980308 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:38:08.980315 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:38:08.980321 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:38:08.980327 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:38:08.980341 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:38:08.980350 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-04-05 00:38:08.980358 | orchestrator |
2026-04-05 00:38:08.980364 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-05 00:38:08.980371 | orchestrator | Sunday 05 April 2026 00:38:02 +0000 (0:00:00.990) 0:00:19.693 **********
2026-04-05 00:38:08.980378 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:08.980384 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:38:08.980391 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:38:08.980398 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:38:08.980404 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:38:08.980411 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:38:08.980418 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:38:08.980424 | orchestrator |
2026-04-05 00:38:08.980431 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-05 00:38:08.980437 | orchestrator | Sunday 05 April 2026 00:38:04 +0000 (0:00:01.550) 0:00:21.244 **********
2026-04-05 00:38:08.980444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:38:08.980452 | orchestrator |
2026-04-05 00:38:08.980459 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-05 00:38:08.980466 | orchestrator | Sunday 05 April 2026 00:38:05 +0000 (0:00:01.287) 0:00:22.531 **********
2026-04-05 00:38:08.980472 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:08.980479 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:08.980486 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:08.980493 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:08.980500 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:08.980507 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:08.980513 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:08.980519 | orchestrator |
2026-04-05 00:38:08.980525 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-05 00:38:08.980539 | orchestrator | Sunday 05 April 2026 00:38:06 +0000 (0:00:00.877) 0:00:23.731 **********
2026-04-05 00:38:08.980545 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:08.980552 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:08.980558 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:08.980565 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:08.980572 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:08.980579 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:08.980585 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:08.980592 | orchestrator |
2026-04-05 00:38:08.980598 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-05 00:38:08.980605 | orchestrator | Sunday 05 April 2026 00:38:07 +0000 (0:00:00.877) 0:00:24.608 **********
2026-04-05 00:38:08.980611 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 00:38:08.980618 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 00:38:08.980624 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 00:38:08.980630 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 00:38:08.980636 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 00:38:08.980643 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 00:38:08.980649 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 00:38:08.980655 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 00:38:08.980662 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 00:38:08.980668 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 00:38:08.980680 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 00:38:08.980687 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 00:38:08.980694 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 00:38:08.980700 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 00:38:08.980707 | orchestrator |
2026-04-05 00:38:08.980723 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-05 00:38:26.966146 | orchestrator | Sunday 05 April 2026 00:38:08 +0000 (0:00:01.125) 0:00:25.734 **********
2026-04-05 00:38:26.966284 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:38:26.966304 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:38:26.966315 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:38:26.966326 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:38:26.966337 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:38:26.966348 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:38:26.966359 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:38:26.966370 | orchestrator |
2026-04-05 00:38:26.966381 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-05 00:38:26.966392 | orchestrator | Sunday 05 April 2026 00:38:09 +0000 (0:00:00.857) 0:00:26.591 **********
2026-04-05 00:38:26.966405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2026-04-05 00:38:26.966418 | orchestrator |
2026-04-05 00:38:26.966429 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-05 00:38:26.966444 | orchestrator | Sunday 05 April 2026 00:38:14 +0000 (0:00:05.001) 0:00:31.592 **********
2026-04-05 00:38:26.966470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966515 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-05 00:38:26.966534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-05 00:38:26.966606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-05 00:38:26.966700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-05 00:38:26.966721 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-05 00:38:26.966762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-05 00:38:26.966775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-05 00:38:26.966786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-05 00:38:26.966797 | orchestrator |
2026-04-05 00:38:26.966808 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-05 00:38:26.966819 | orchestrator | Sunday 05 April 2026 00:38:21 +0000 (0:00:06.249) 0:00:37.841 **********
2026-04-05 00:38:26.966830 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-05 00:38:26.966847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966905 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966918 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-05 00:38:26.966934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-05 00:38:26.966954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-05 00:38:26.966977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-05 00:38:26.966988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-05 00:38:26.966999 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-05 00:38:26.967020 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-05 00:38:40.523765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-05 00:38:40.523898 | orchestrator |
2026-04-05 00:38:40.523923 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-05 00:38:40.523942 | orchestrator | Sunday 05 April 2026 00:38:27 +0000 (0:00:06.159) 0:00:44.001 **********
2026-04-05 00:38:40.523961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:38:40.523978 | orchestrator |
2026-04-05 00:38:40.523995 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-05 00:38:40.524010 | orchestrator | Sunday 05 April 2026 00:38:28 +0000 (0:00:01.139) 0:00:45.141 **********
2026-04-05 00:38:40.524025 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:40.524042 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:40.524057 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:40.524073 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:40.524089 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:40.524105 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:40.524121 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:40.524137 | orchestrator |
2026-04-05 00:38:40.524153 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-05 00:38:40.524169 | orchestrator | Sunday 05 April 2026 00:38:29 +0000 (0:00:01.010) 0:00:46.152 **********
2026-04-05 00:38:40.524186 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 00:38:40.524203 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 00:38:40.524219 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 00:38:40.524298 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 00:38:40.524317 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 00:38:40.524334 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 00:38:40.524350 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 00:38:40.524367 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 00:38:40.524383 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:38:40.524402 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 00:38:40.524418 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 00:38:40.524436 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 00:38:40.524452 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 00:38:40.524470 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:38:40.524487 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 00:38:40.524504 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 00:38:40.524521 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 00:38:40.524538 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 00:38:40.524556 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:38:40.524571 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 00:38:40.524587 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 00:38:40.524604 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 00:38:40.524620 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 00:38:40.524636 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:38:40.524653 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 00:38:40.524669 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 00:38:40.524684 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 00:38:40.524699 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 00:38:40.524715 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:38:40.524731 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:38:40.524747 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 00:38:40.524763 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 00:38:40.524778 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 00:38:40.524794 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 00:38:40.524809 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:38:40.524825 | orchestrator |
2026-04-05 00:38:40.524841 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-05 00:38:40.524882 | orchestrator | Sunday 05 April 2026 00:38:30 +0000 (0:00:00.772) 0:00:46.924 **********
2026-04-05 00:38:40.524900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:38:40.524916 | orchestrator |
2026-04-05 00:38:40.524933 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-05 00:38:40.524986 | orchestrator | Sunday 05 April 2026 00:38:31 +0000 (0:00:01.339) 0:00:48.264 **********
2026-04-05 00:38:40.525005 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:38:40.525022 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:38:40.525038 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:38:40.525053 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:38:40.525069 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:38:40.525085 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:38:40.525100 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:38:40.525116 | orchestrator |
2026-04-05 00:38:40.525132 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-04-05 00:38:40.525146 | orchestrator | Sunday 05 April 2026 00:38:32 +0000 (0:00:00.836) 0:00:49.100 **********
2026-04-05 00:38:40.525162 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:38:40.525179 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:38:40.525194 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:38:40.525210 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:38:40.525244 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:38:40.525261 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:38:40.525275 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:38:40.525288 | orchestrator |
2026-04-05 00:38:40.525302 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-04-05 00:38:40.525317 | orchestrator | Sunday 05 April 2026 00:38:32 +0000 (0:00:00.655) 0:00:49.756 **********
2026-04-05 00:38:40.525331 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:38:40.525345 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:38:40.525357 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:38:40.525369 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:38:40.525381 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:38:40.525394 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:38:40.525408 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:38:40.525422 | orchestrator |
2026-04-05 00:38:40.525438 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-04-05 00:38:40.525453 | orchestrator | Sunday 05 April 2026 00:38:33 +0000 (0:00:00.845) 0:00:50.602 **********
2026-04-05 00:38:40.525467 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:38:40.525482 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:38:40.525498 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:38:40.525513 | orchestrator | ok: [testbed-manager]
2026-04-05 00:38:40.525529 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:38:40.525545 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:38:40.525560 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:38:40.525573 | orchestrator |
2026-04-05 00:38:40.525588 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-05 00:38:40.525601 | orchestrator | Sunday 05 April 2026 00:38:35 +0000 (0:00:01.672) 0:00:52.275 **********
2026-04-05 00:38:40.525615 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:40.525629 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:40.525643 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:40.525658 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:40.525673 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:40.525687 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:40.525703 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:40.525718 | orchestrator | 2026-04-05 00:38:40.525746 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-05 00:38:40.525763 | orchestrator | Sunday 05 April 2026 00:38:36 +0000 (0:00:01.248) 0:00:53.523 ********** 2026-04-05 00:38:40.525779 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:40.525794 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:40.525816 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:40.525832 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:40.525848 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:40.525863 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:40.525879 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:40.525896 | orchestrator | 2026-04-05 00:38:40.525927 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-05 00:38:40.525943 | orchestrator | Sunday 05 April 2026 00:38:39 +0000 (0:00:02.319) 0:00:55.843 ********** 2026-04-05 00:38:40.525958 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:40.525974 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:40.525990 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:40.526005 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:40.526093 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:40.526115 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:40.526131 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
00:38:40.526147 | orchestrator | 2026-04-05 00:38:40.526163 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-05 00:38:40.526179 | orchestrator | Sunday 05 April 2026 00:38:39 +0000 (0:00:00.666) 0:00:56.510 ********** 2026-04-05 00:38:40.526194 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:40.526210 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:40.526251 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:40.526267 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:40.526280 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:40.526294 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:40.526309 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:40.526325 | orchestrator | 2026-04-05 00:38:40.526341 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:38:40.526358 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-05 00:38:40.526375 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 00:38:40.526409 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 00:38:40.836789 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 00:38:40.836887 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 00:38:40.836902 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 00:38:40.836915 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 00:38:40.836926 | orchestrator | 2026-04-05 00:38:40.836938 | orchestrator | 2026-04-05 00:38:40.836950 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:38:40.836962 | orchestrator | Sunday 05 April 2026 00:38:40 +0000 (0:00:00.769) 0:00:57.280 ********** 2026-04-05 00:38:40.836973 | orchestrator | =============================================================================== 2026-04-05 00:38:40.836983 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.25s 2026-04-05 00:38:40.836994 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.16s 2026-04-05 00:38:40.837005 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.00s 2026-04-05 00:38:40.837015 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.49s 2026-04-05 00:38:40.837026 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.50s 2026-04-05 00:38:40.837037 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.32s 2026-04-05 00:38:40.837048 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.11s 2026-04-05 00:38:40.837058 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.10s 2026-04-05 00:38:40.837097 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s 2026-04-05 00:38:40.837115 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.67s 2026-04-05 00:38:40.837133 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s 2026-04-05 00:38:40.837150 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.55s 2026-04-05 00:38:40.837167 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.34s 2026-04-05 00:38:40.837184 | orchestrator | 
osism.commons.network : Include cleanup tasks --------------------------- 1.29s 2026-04-05 00:38:40.837199 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.25s 2026-04-05 00:38:40.837216 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s 2026-04-05 00:38:40.837277 | orchestrator | osism.commons.network : Create required directories --------------------- 1.20s 2026-04-05 00:38:40.837297 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.19s 2026-04-05 00:38:40.837327 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.14s 2026-04-05 00:38:40.837341 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.13s 2026-04-05 00:38:41.133178 | orchestrator | + osism apply wireguard 2026-04-05 00:38:52.503389 | orchestrator | 2026-04-05 00:38:52 | INFO  | Prepare task for execution of wireguard. 2026-04-05 00:38:52.614796 | orchestrator | 2026-04-05 00:38:52 | INFO  | Task 530575ac-ef9c-44aa-8fa3-d4f7d658f4a0 (wireguard) was prepared for execution. 2026-04-05 00:38:52.614879 | orchestrator | 2026-04-05 00:38:52 | INFO  | It takes a moment until task 530575ac-ef9c-44aa-8fa3-d4f7d658f4a0 (wireguard) has been started and output is visible here. 
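The network role above creates systemd-networkd netdev/network file pairs such as /etc/systemd/network/30-vxlan0.netdev and 30-vxlan0.network. For orientation, a minimal sketch of such a pair follows; the VNI, local endpoint, and address values are hypothetical and are not taken from this job:

```ini
# /etc/systemd/network/30-vxlan0.netdev -- sketch; VNI, Local, and port are hypothetical
[NetDev]
Name=vxlan0
Kind=vxlan

[VXLAN]
VNI=42
Local=192.168.16.10
DestinationPort=4789

# /etc/systemd/network/30-vxlan0.network -- sketch; assigns an address to the interface
[Match]
Name=vxlan0

[Network]
Address=192.168.100.10/24
```

After writing such files, a `networkctl reload` (or the "Reload systemd-networkd" handler seen above) makes systemd-networkd pick them up.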
2026-04-05 00:39:14.266723 | orchestrator |
2026-04-05 00:39:14.266830 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-05 00:39:14.266844 | orchestrator |
2026-04-05 00:39:14.266853 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-05 00:39:14.266861 | orchestrator | Sunday 05 April 2026 00:38:56 +0000 (0:00:00.315) 0:00:00.315 **********
2026-04-05 00:39:14.266869 | orchestrator | ok: [testbed-manager]
2026-04-05 00:39:14.266879 | orchestrator |
2026-04-05 00:39:14.266888 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-05 00:39:14.266896 | orchestrator | Sunday 05 April 2026 00:38:58 +0000 (0:00:02.002) 0:00:02.317 **********
2026-04-05 00:39:14.266904 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:14.266913 | orchestrator |
2026-04-05 00:39:14.266922 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-05 00:39:14.266931 | orchestrator | Sunday 05 April 2026 00:39:05 +0000 (0:00:07.575) 0:00:09.892 **********
2026-04-05 00:39:14.266940 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:14.266949 | orchestrator |
2026-04-05 00:39:14.266957 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-05 00:39:14.266965 | orchestrator | Sunday 05 April 2026 00:39:06 +0000 (0:00:00.566) 0:00:10.459 **********
2026-04-05 00:39:14.266974 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:14.266979 | orchestrator |
2026-04-05 00:39:14.266984 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-05 00:39:14.266990 | orchestrator | Sunday 05 April 2026 00:39:06 +0000 (0:00:00.499) 0:00:10.958 **********
2026-04-05 00:39:14.266995 | orchestrator | ok: [testbed-manager]
2026-04-05 00:39:14.267001 | orchestrator |
2026-04-05 00:39:14.267007 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-05 00:39:14.267012 | orchestrator | Sunday 05 April 2026 00:39:07 +0000 (0:00:00.582) 0:00:11.540 **********
2026-04-05 00:39:14.267021 | orchestrator | ok: [testbed-manager]
2026-04-05 00:39:14.267029 | orchestrator |
2026-04-05 00:39:14.267038 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-05 00:39:14.267075 | orchestrator | Sunday 05 April 2026 00:39:07 +0000 (0:00:00.505) 0:00:12.045 **********
2026-04-05 00:39:14.267085 | orchestrator | ok: [testbed-manager]
2026-04-05 00:39:14.267093 | orchestrator |
2026-04-05 00:39:14.267101 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-05 00:39:14.267110 | orchestrator | Sunday 05 April 2026 00:39:08 +0000 (0:00:00.474) 0:00:12.520 **********
2026-04-05 00:39:14.267115 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:14.267120 | orchestrator |
2026-04-05 00:39:14.267126 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-05 00:39:14.267131 | orchestrator | Sunday 05 April 2026 00:39:09 +0000 (0:00:01.364) 0:00:13.885 **********
2026-04-05 00:39:14.267139 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 00:39:14.267148 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:14.267154 | orchestrator |
2026-04-05 00:39:14.267159 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-05 00:39:14.267164 | orchestrator | Sunday 05 April 2026 00:39:10 +0000 (0:00:01.026) 0:00:14.911 **********
2026-04-05 00:39:14.267169 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:14.267174 | orchestrator |
2026-04-05 00:39:14.267179 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-05 00:39:14.267188 | orchestrator | Sunday 05 April 2026 00:39:12 +0000 (0:00:02.253) 0:00:17.165 **********
2026-04-05 00:39:14.267197 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:14.267205 | orchestrator |
2026-04-05 00:39:14.267213 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:39:14.267222 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:39:14.267231 | orchestrator |
2026-04-05 00:39:14.267240 | orchestrator |
2026-04-05 00:39:14.267249 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:39:14.267285 | orchestrator | Sunday 05 April 2026 00:39:13 +0000 (0:00:00.978) 0:00:18.143 **********
2026-04-05 00:39:14.267296 | orchestrator | ===============================================================================
2026-04-05 00:39:14.267304 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.58s
2026-04-05 00:39:14.267310 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.25s
2026-04-05 00:39:14.267316 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 2.00s
2026-04-05 00:39:14.267323 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.36s
2026-04-05 00:39:14.267329 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.03s
2026-04-05 00:39:14.267335 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s
2026-04-05 00:39:14.267341 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.58s
2026-04-05 00:39:14.267347 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2026-04-05 00:39:14.267365 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s
2026-04-05 00:39:14.267371 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.50s
2026-04-05 00:39:14.267378 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.47s
2026-04-05 00:39:14.542347 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-05 00:39:14.583544 | orchestrator |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2026-04-05 00:39:14.583633 | orchestrator |                                  Dload  Upload   Total   Spent    Left  Speed
2026-04-05 00:39:14.666933 | orchestrator |   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0 100    14  100    14    0     0    168      0 --:--:-- --:--:-- --:--:--   170
2026-04-05 00:39:14.681669 | orchestrator | + osism apply --environment custom workarounds
2026-04-05 00:39:16.086089 | orchestrator | 2026-04-05 00:39:16 | INFO  | Trying to run play workarounds in environment custom
2026-04-05 00:39:26.204523 | orchestrator | 2026-04-05 00:39:26 | INFO  | Prepare task for execution of workarounds.
2026-04-05 00:39:26.287946 | orchestrator | 2026-04-05 00:39:26 | INFO  | Task 4508a732-5c6f-4889-ae5e-f20b90bf9159 (workarounds) was prepared for execution.
2026-04-05 00:39:26.288041 | orchestrator | 2026-04-05 00:39:26 | INFO  | It takes a moment until task 4508a732-5c6f-4889-ae5e-f20b90bf9159 (workarounds) has been started and output is visible here.
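The wireguard role above generates server/preshared keys and renders a wg0.conf on testbed-manager. The general shape of such a wg-quick configuration looks like the following sketch; all addresses, port, and key placeholders are hypothetical and are not the values used in this job:

```ini
# /etc/wireguard/wg0.conf -- sketch of a wg-quick server config; values are placeholders
[Interface]
Address = 192.168.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.0.2/32
```

The "Manage wg-quick@wg0.service service" task then enables the templated systemd unit, which brings the tunnel up via `wg-quick up wg0`.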
2026-04-05 00:39:51.919280 | orchestrator |
2026-04-05 00:39:51.919480 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:39:51.919497 | orchestrator |
2026-04-05 00:39:51.919510 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-05 00:39:51.919522 | orchestrator | Sunday 05 April 2026 00:39:29 +0000 (0:00:00.185) 0:00:00.185 **********
2026-04-05 00:39:51.919533 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-05 00:39:51.919545 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-05 00:39:51.919555 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-05 00:39:51.919566 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-05 00:39:51.919577 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-05 00:39:51.919588 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-05 00:39:51.919598 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-05 00:39:51.919609 | orchestrator |
2026-04-05 00:39:51.919620 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-05 00:39:51.919631 | orchestrator |
2026-04-05 00:39:51.919642 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-05 00:39:51.919653 | orchestrator | Sunday 05 April 2026 00:39:30 +0000 (0:00:00.746) 0:00:00.931 **********
2026-04-05 00:39:51.919664 | orchestrator | ok: [testbed-manager]
2026-04-05 00:39:51.919676 | orchestrator |
2026-04-05 00:39:51.919686 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-05 00:39:51.919697 | orchestrator |
2026-04-05 00:39:51.919708 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-05 00:39:51.919719 | orchestrator | Sunday 05 April 2026 00:39:33 +0000 (0:00:02.972) 0:00:03.903 **********
2026-04-05 00:39:51.919730 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:39:51.919741 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:39:51.919752 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:39:51.919762 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:39:51.919773 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:39:51.919784 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:39:51.919795 | orchestrator |
2026-04-05 00:39:51.919805 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-05 00:39:51.919817 | orchestrator |
2026-04-05 00:39:51.919828 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-05 00:39:51.919839 | orchestrator | Sunday 05 April 2026 00:39:35 +0000 (0:00:02.355) 0:00:06.259 **********
2026-04-05 00:39:51.919850 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-05 00:39:51.919862 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-05 00:39:51.919873 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-05 00:39:51.919884 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-05 00:39:51.919894 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-05 00:39:51.919905 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-05 00:39:51.919940 | orchestrator |
2026-04-05 00:39:51.919952 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-05 00:39:51.919962 | orchestrator | Sunday 05 April 2026 00:39:37 +0000 (0:00:01.439) 0:00:07.698 **********
2026-04-05 00:39:51.919973 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:39:51.919984 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:39:51.919995 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:39:51.920005 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:39:51.920016 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:39:51.920026 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:39:51.920036 | orchestrator |
2026-04-05 00:39:51.920047 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-05 00:39:51.920072 | orchestrator | Sunday 05 April 2026 00:39:41 +0000 (0:00:03.936) 0:00:11.634 **********
2026-04-05 00:39:51.920083 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:39:51.920094 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:39:51.920104 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:39:51.920115 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:39:51.920125 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:39:51.920135 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:39:51.920146 | orchestrator |
2026-04-05 00:39:51.920157 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-05 00:39:51.920167 | orchestrator |
2026-04-05 00:39:51.920178 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-05 00:39:51.920188 | orchestrator | Sunday 05 April 2026 00:39:41 +0000 (0:00:00.546) 0:00:12.181 **********
2026-04-05 00:39:51.920200 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:51.920211 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:39:51.920221 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:39:51.920232 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:39:51.920242 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:39:51.920253 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:39:51.920263 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:39:51.920274 | orchestrator |
2026-04-05 00:39:51.920285 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-05 00:39:51.920323 | orchestrator | Sunday 05 April 2026 00:39:43 +0000 (0:00:01.811) 0:00:13.993 **********
2026-04-05 00:39:51.920334 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:51.920345 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:39:51.920355 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:39:51.920366 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:39:51.920377 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:39:51.920388 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:39:51.920417 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:39:51.920429 | orchestrator |
2026-04-05 00:39:51.920439 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-05 00:39:51.920450 | orchestrator | Sunday 05 April 2026 00:39:44 +0000 (0:00:01.494) 0:00:15.487 **********
2026-04-05 00:39:51.920461 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:39:51.920471 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:39:51.920482 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:39:51.920492 | orchestrator | ok: [testbed-manager]
2026-04-05 00:39:51.920503 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:39:51.920513 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:39:51.920524 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:39:51.920534 | orchestrator |
2026-04-05 00:39:51.920545 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-05 00:39:51.920556 | orchestrator | Sunday 05 April 2026 00:39:46 +0000 (0:00:01.737) 0:00:17.224 **********
2026-04-05 00:39:51.920566 | orchestrator | changed: [testbed-manager]
2026-04-05 00:39:51.920577 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:39:51.920587 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:39:51.920598 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:39:51.920608 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:39:51.920628 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:39:51.920639 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:39:51.920649 | orchestrator |
2026-04-05 00:39:51.920660 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-05 00:39:51.920670 | orchestrator | Sunday 05 April 2026 00:39:48 +0000 (0:00:01.626) 0:00:18.851 **********
2026-04-05 00:39:51.920681 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:39:51.920691 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:39:51.920702 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:39:51.920712 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:39:51.920723 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:39:51.920733 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:39:51.920743 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:39:51.920754 | orchestrator |
2026-04-05 00:39:51.920765 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-05 00:39:51.920776 | orchestrator |
2026-04-05 00:39:51.920786 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-05 00:39:51.920797 | orchestrator | Sunday 05 April 2026 00:39:49 +0000 (0:00:00.765) 0:00:19.616 **********
2026-04-05 00:39:51.920807 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:39:51.920818 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:39:51.920829 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:39:51.920839 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:39:51.920849 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:39:51.920860 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:39:51.920870 | orchestrator | ok: [testbed-manager]
2026-04-05 00:39:51.920881 | orchestrator |
2026-04-05 00:39:51.920891 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:39:51.920903 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:39:51.920916 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:39:51.920926 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:39:51.920944 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:39:51.920963 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:39:51.920979 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:39:51.920996 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:39:51.921012 | orchestrator |
2026-04-05 00:39:51.921029 | orchestrator |
2026-04-05 00:39:51.921053 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:39:51.921070 | orchestrator | Sunday 05 April 2026 00:39:51 +0000 (0:00:02.883) 0:00:22.499 **********
2026-04-05 00:39:51.921086 | orchestrator | ===============================================================================
2026-04-05 00:39:51.921102 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.94s
2026-04-05 00:39:51.921118 | orchestrator | Apply netplan configuration --------------------------------------------- 2.97s
2026-04-05 00:39:51.921135 | orchestrator | Install python3-docker -------------------------------------------------- 2.88s
2026-04-05 00:39:51.921151 | orchestrator | Apply netplan configuration --------------------------------------------- 2.36s
2026-04-05 00:39:51.921169 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.81s
2026-04-05 00:39:51.921195 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.74s
2026-04-05 00:39:51.921211 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.63s
2026-04-05 00:39:51.921227 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.49s
2026-04-05 00:39:51.921243 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.44s
2026-04-05 00:39:51.921259 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.77s
2026-04-05 00:39:51.921277 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.75s
2026-04-05 00:39:51.921335 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.55s
2026-04-05 00:39:52.450678 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-05 00:40:03.804800 | orchestrator | 2026-04-05 00:40:03 | INFO  | Prepare task for execution of reboot.
2026-04-05 00:40:03.880624 | orchestrator | 2026-04-05 00:40:03 | INFO  | Task 46c95475-0114-46a4-9434-a4dda2b09e68 (reboot) was prepared for execution.
2026-04-05 00:40:03.880710 | orchestrator | 2026-04-05 00:40:03 | INFO  | It takes a moment until task 46c95475-0114-46a4-9434-a4dda2b09e68 (reboot) has been started and output is visible here.
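The reboot play invoked with `-e ireallymeanit=yes` guards against accidental reboots, as the "Exit playbook, if user did not mean to reboot systems" task below shows. The guard pattern can be sketched roughly as follows; this is a hypothetical reconstruction, not the osism playbook source, and task names and variables other than `ireallymeanit` are illustrative:

```yaml
# Hypothetical sketch of a confirmation-guarded reboot play
- name: Reboot systems
  hosts: testbed-nodes
  serial: 1
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to confirm the reboot"
      when: ireallymeanit | default('no') != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.command: shutdown -r +1
```

Because the playbook was invoked with `ireallymeanit=yes`, the guard task is skipped for each host and the reboot proceeds, as seen in the output below.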
2026-04-05 00:40:14.840087 | orchestrator |
2026-04-05 00:40:14.840210 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-05 00:40:14.840227 | orchestrator |
2026-04-05 00:40:14.840240 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-05 00:40:14.840251 | orchestrator | Sunday 05 April 2026 00:40:06 +0000 (0:00:00.228) 0:00:00.228 **********
2026-04-05 00:40:14.840262 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:40:14.840274 | orchestrator |
2026-04-05 00:40:14.840286 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 00:40:14.840297 | orchestrator | Sunday 05 April 2026 00:40:06 +0000 (0:00:00.128) 0:00:00.356 **********
2026-04-05 00:40:14.840308 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:40:14.840345 | orchestrator |
2026-04-05 00:40:14.840356 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 00:40:14.840367 | orchestrator | Sunday 05 April 2026 00:40:08 +0000 (0:00:01.178) 0:00:01.534 **********
2026-04-05 00:40:14.840378 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:40:14.840389 | orchestrator |
2026-04-05 00:40:14.840400 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-05 00:40:14.840411 | orchestrator |
2026-04-05 00:40:14.840421 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-05 00:40:14.840433 | orchestrator | Sunday 05 April 2026 00:40:08 +0000 (0:00:00.099) 0:00:01.633 **********
2026-04-05 00:40:14.840443 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:40:14.840454 | orchestrator |
2026-04-05 00:40:14.840465 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 00:40:14.840476 | orchestrator | Sunday 05 April 2026 00:40:08 +0000 (0:00:00.086) 0:00:01.720 **********
2026-04-05 00:40:14.840487 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:40:14.840498 | orchestrator |
2026-04-05 00:40:14.840509 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 00:40:14.840520 | orchestrator | Sunday 05 April 2026 00:40:09 +0000 (0:00:01.031) 0:00:02.752 **********
2026-04-05 00:40:14.840531 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:40:14.840541 | orchestrator |
2026-04-05 00:40:14.840552 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-05 00:40:14.840563 | orchestrator |
2026-04-05 00:40:14.840574 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-05 00:40:14.840585 | orchestrator | Sunday 05 April 2026 00:40:09 +0000 (0:00:00.114) 0:00:02.866 **********
2026-04-05 00:40:14.840599 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:40:14.840619 | orchestrator |
2026-04-05 00:40:14.840639 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 00:40:14.840693 | orchestrator | Sunday 05 April 2026 00:40:09 +0000 (0:00:00.096) 0:00:02.963 **********
2026-04-05 00:40:14.840716 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:40:14.840735 | orchestrator |
2026-04-05 00:40:14.840758 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 00:40:14.840779 | orchestrator | Sunday 05 April 2026 00:40:10 +0000 (0:00:01.011) 0:00:03.975 **********
2026-04-05 00:40:14.840800 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:40:14.840817 | orchestrator |
2026-04-05 00:40:14.840830 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-05 00:40:14.840845 | orchestrator |
2026-04-05 00:40:14.840863 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-05 00:40:14.840882 | orchestrator | Sunday 05 April 2026 00:40:10 +0000 (0:00:00.120) 0:00:04.095 **********
2026-04-05 00:40:14.840901 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:40:14.840921 | orchestrator |
2026-04-05 00:40:14.840942 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 00:40:14.840981 | orchestrator | Sunday 05 April 2026 00:40:10 +0000 (0:00:00.108) 0:00:04.204 **********
2026-04-05 00:40:14.841003 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:40:14.841021 | orchestrator |
2026-04-05 00:40:14.841041 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 00:40:14.841053 | orchestrator | Sunday 05 April 2026 00:40:11 +0000 (0:00:00.951) 0:00:05.156 **********
2026-04-05 00:40:14.841064 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:40:14.841075 | orchestrator |
2026-04-05 00:40:14.841085 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-05 00:40:14.841096 | orchestrator |
2026-04-05 00:40:14.841106 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-05 00:40:14.841117 | orchestrator | Sunday 05 April 2026 00:40:11 +0000 (0:00:00.137) 0:00:05.294 **********
2026-04-05 00:40:14.841128 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:40:14.841138 | orchestrator |
2026-04-05 00:40:14.841149 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 00:40:14.841160 | orchestrator | Sunday 05 April 2026 00:40:12 +0000 (0:00:00.242) 0:00:05.536 **********
2026-04-05 00:40:14.841170 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:40:14.841181 | orchestrator |
2026-04-05 00:40:14.841192 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 00:40:14.841203 | orchestrator | Sunday 05 April 2026 00:40:13 +0000 (0:00:01.036) 0:00:06.573 **********
2026-04-05 00:40:14.841213 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:40:14.841224 | orchestrator |
2026-04-05 00:40:14.841235 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-05 00:40:14.841245 | orchestrator |
2026-04-05 00:40:14.841256 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-05 00:40:14.841266 | orchestrator | Sunday 05 April 2026 00:40:13 +0000 (0:00:00.138) 0:00:06.711 **********
2026-04-05 00:40:14.841277 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:40:14.841288 | orchestrator |
2026-04-05 00:40:14.841298 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 00:40:14.841309 | orchestrator | Sunday 05 April 2026 00:40:13 +0000 (0:00:00.110) 0:00:06.821 **********
2026-04-05 00:40:14.841362 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:40:14.841373 | orchestrator |
2026-04-05 00:40:14.841384 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 00:40:14.841395 | orchestrator | Sunday 05 April 2026 00:40:14 +0000 (0:00:01.074) 0:00:07.895 **********
2026-04-05 00:40:14.841427 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:40:14.841438 | orchestrator |
2026-04-05 00:40:14.841449 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:40:14.841461 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:40:14.841485 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:40:14.841496 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:40:14.841507 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:40:14.841518 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:40:14.841529 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:40:14.841539 | orchestrator |
2026-04-05 00:40:14.841550 | orchestrator |
2026-04-05 00:40:14.841561 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:40:14.841572 | orchestrator | Sunday 05 April 2026 00:40:14 +0000 (0:00:00.039) 0:00:07.935 **********
2026-04-05 00:40:14.841583 | orchestrator | ===============================================================================
2026-04-05 00:40:14.841593 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.28s
2026-04-05 00:40:14.841604 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s
2026-04-05 00:40:14.841615 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s
2026-04-05 00:40:15.053783 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-05 00:40:26.431820 | orchestrator | 2026-04-05 00:40:26 | INFO  | Prepare task for execution of wait-for-connection.
2026-04-05 00:40:26.520533 | orchestrator | 2026-04-05 00:40:26 | INFO  | Task 21c10a9b-bf55-496f-8746-2cb9a8ef4bf5 (wait-for-connection) was prepared for execution.
2026-04-05 00:40:26.520611 | orchestrator | 2026-04-05 00:40:26 | INFO  | It takes a moment until task 21c10a9b-bf55-496f-8746-2cb9a8ef4bf5 (wait-for-connection) has been started and output is visible here.
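The sequence above reboots the nodes asynchronously (`osism apply reboot` with the "do not wait" task) and then uses a separate `osism apply wait-for-connection` run to block until every node answers again. A minimal shell sketch of that wait phase, under stated assumptions: the real job delegates this to Ansible's `wait_for_connection` module, and `wait_for_node`, `NODE_PROBE`, and `POLL_INTERVAL` are hypothetical names introduced here so the loop itself can be exercised without live hosts.

```shell
# Hypothetical sketch of the post-reboot wait: poll each node until a
# TCP probe of its SSH port succeeds, or give up after max_attempts.
# NODE_PROBE defaults to nc; override it (e.g. with `true`) for testing.
wait_for_node() {
    local host="$1" max_attempts="${2:-120}"
    local attempt_num=1
    until ${NODE_PROBE:-nc -z -w 5} "$host" 22 2>/dev/null; do
        if (( attempt_num++ == max_attempts )); then
            echo "timed out waiting for $host" >&2
            return 1
        fi
        sleep "${POLL_INTERVAL:-5}"
    done
}
```

In the job itself this corresponds to the `Wait until remote system is reachable` task, which took 11.62s to see all six nodes back.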
2026-04-05 00:40:41.926153 | orchestrator |
2026-04-05 00:40:41.926244 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-04-05 00:40:41.926255 | orchestrator |
2026-04-05 00:40:41.926263 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-04-05 00:40:41.926271 | orchestrator | Sunday 05 April 2026 00:40:29 +0000 (0:00:00.346) 0:00:00.346 **********
2026-04-05 00:40:41.926279 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:40:41.926302 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:40:41.926310 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:40:41.926317 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:40:41.926332 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:40:41.926375 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:40:41.926383 | orchestrator |
2026-04-05 00:40:41.926391 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:40:41.926399 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:40:41.926408 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:40:41.926416 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:40:41.926424 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:40:41.926431 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:40:41.926460 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:40:41.926468 | orchestrator |
2026-04-05 00:40:41.926475 | orchestrator |
2026-04-05 00:40:41.926482 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:40:41.926490 | orchestrator | Sunday 05 April 2026 00:40:41 +0000 (0:00:11.620) 0:00:11.967 **********
2026-04-05 00:40:41.926497 | orchestrator | ===============================================================================
2026-04-05 00:40:41.926504 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.62s
2026-04-05 00:40:42.126285 | orchestrator | + osism apply hddtemp
2026-04-05 00:40:53.580978 | orchestrator | 2026-04-05 00:40:53 | INFO  | Prepare task for execution of hddtemp.
2026-04-05 00:40:53.658326 | orchestrator | 2026-04-05 00:40:53 | INFO  | Task 3ca5d014-ca16-4fcf-b8b4-c1d96cb23ab8 (hddtemp) was prepared for execution.
2026-04-05 00:40:53.658441 | orchestrator | 2026-04-05 00:40:53 | INFO  | It takes a moment until task 3ca5d014-ca16-4fcf-b8b4-c1d96cb23ab8 (hddtemp) has been started and output is visible here.
2026-04-05 00:41:22.667111 | orchestrator |
2026-04-05 00:41:22.667254 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-04-05 00:41:22.667271 | orchestrator |
2026-04-05 00:41:22.667284 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-04-05 00:41:22.667296 | orchestrator | Sunday 05 April 2026 00:40:57 +0000 (0:00:00.386) 0:00:00.386 **********
2026-04-05 00:41:22.667308 | orchestrator | ok: [testbed-manager]
2026-04-05 00:41:22.667320 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:41:22.667331 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:41:22.667342 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:41:22.667353 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:41:22.667364 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:41:22.667444 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:41:22.667455 | orchestrator |
2026-04-05 00:41:22.667466 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-04-05 00:41:22.667477 | orchestrator | Sunday 05 April 2026 00:40:57 +0000 (0:00:00.657) 0:00:01.044 **********
2026-04-05 00:41:22.667491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:41:22.667505 | orchestrator |
2026-04-05 00:41:22.667516 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-04-05 00:41:22.667528 | orchestrator | Sunday 05 April 2026 00:40:59 +0000 (0:00:01.178) 0:00:02.222 **********
2026-04-05 00:41:22.667538 | orchestrator | ok: [testbed-manager]
2026-04-05 00:41:22.667549 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:41:22.667584 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:41:22.667596 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:41:22.667606 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:41:22.667618 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:41:22.667631 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:41:22.667644 | orchestrator |
2026-04-05 00:41:22.667656 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-04-05 00:41:22.667669 | orchestrator | Sunday 05 April 2026 00:41:01 +0000 (0:00:02.832) 0:00:05.055 **********
2026-04-05 00:41:22.667681 | orchestrator | changed: [testbed-manager]
2026-04-05 00:41:22.667695 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:41:22.667709 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:41:22.667722 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:41:22.667734 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:41:22.667747 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:41:22.667759 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:41:22.667772 | orchestrator |
2026-04-05 00:41:22.667783 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-04-05 00:41:22.667820 | orchestrator | Sunday 05 April 2026 00:41:02 +0000 (0:00:01.027) 0:00:06.083 **********
2026-04-05 00:41:22.667832 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:41:22.667843 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:41:22.667854 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:41:22.667864 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:41:22.667875 | orchestrator | ok: [testbed-manager]
2026-04-05 00:41:22.667886 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:41:22.667896 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:41:22.667907 | orchestrator |
2026-04-05 00:41:22.667918 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-04-05 00:41:22.667929 | orchestrator | Sunday 05 April 2026 00:41:04 +0000 (0:00:01.376) 0:00:07.459 **********
2026-04-05 00:41:22.667939 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:41:22.667950 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:41:22.667966 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:41:22.667977 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:41:22.667988 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:41:22.667998 | orchestrator | changed: [testbed-manager]
2026-04-05 00:41:22.668009 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:41:22.668019 | orchestrator |
2026-04-05 00:41:22.668030 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-04-05 00:41:22.668041 | orchestrator | Sunday 05 April 2026 00:41:04 +0000 (0:00:00.658) 0:00:08.118 **********
2026-04-05 00:41:22.668052 | orchestrator | changed: [testbed-manager]
2026-04-05 00:41:22.668063 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:41:22.668073 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:41:22.668084 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:41:22.668094 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:41:22.668105 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:41:22.668116 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:41:22.668126 | orchestrator |
2026-04-05 00:41:22.668137 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-04-05 00:41:22.668148 | orchestrator | Sunday 05 April 2026 00:41:19 +0000 (0:00:14.155) 0:00:22.273 **********
2026-04-05 00:41:22.668159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:41:22.668170 | orchestrator |
2026-04-05 00:41:22.668181 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-04-05 00:41:22.668192 | orchestrator | Sunday 05 April 2026 00:41:20 +0000 (0:00:01.274) 0:00:23.548 **********
2026-04-05 00:41:22.668203 | orchestrator | changed: [testbed-manager]
2026-04-05 00:41:22.668213 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:41:22.668224 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:41:22.668235 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:41:22.668246 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:41:22.668256 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:41:22.668267 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:41:22.668278 | orchestrator |
2026-04-05 00:41:22.668288 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:41:22.668300 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:41:22.668335 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:41:22.668347 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:41:22.668358 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:41:22.668401 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:41:22.668412 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:41:22.668423 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:41:22.668434 | orchestrator |
2026-04-05 00:41:22.668445 | orchestrator |
2026-04-05 00:41:22.668456 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:41:22.668467 | orchestrator | Sunday 05 April 2026 00:41:22 +0000 (0:00:01.969) 0:00:25.518 **********
2026-04-05 00:41:22.668478 | orchestrator | ===============================================================================
2026-04-05 00:41:22.668489 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.16s
2026-04-05 00:41:22.668500 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.83s
2026-04-05 00:41:22.668511 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s
2026-04-05 00:41:22.668522 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.38s
2026-04-05 00:41:22.668533 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s
2026-04-05 00:41:22.668544 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s
2026-04-05 00:41:22.668554 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.03s
2026-04-05 00:41:22.668565 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.66s
2026-04-05 00:41:22.668576 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.66s
2026-04-05 00:41:22.861706 | orchestrator | ++ semver 10.0.0 7.1.1
2026-04-05 00:41:22.921694 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 00:41:22.921834 | orchestrator | + sudo systemctl restart manager.service
2026-04-05 00:41:36.537007 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-05 00:41:36.537137 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-05 00:41:36.537155 | orchestrator | + local max_attempts=60
2026-04-05 00:41:36.537167 | orchestrator | + local name=ceph-ansible
2026-04-05 00:41:36.537178 | orchestrator | + local attempt_num=1
2026-04-05 00:41:36.537190 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:41:36.590944 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:41:36.591041 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:41:36.591057 | orchestrator | + sleep 5
2026-04-05 00:41:41.596154 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:41:41.636947 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:41:41.637044 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:41:41.637058 | orchestrator | + sleep 5
2026-04-05 00:41:46.640820 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:41:46.687765 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:41:46.687868 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:41:46.687883 | orchestrator | + sleep 5
2026-04-05 00:41:51.694157 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:41:51.739689 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:41:51.739814 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:41:51.739841 | orchestrator | + sleep 5
2026-04-05 00:41:56.744030 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:41:56.787609 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:41:56.787742 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:41:56.787760 | orchestrator | + sleep 5
2026-04-05 00:42:01.791357 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:01.835056 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:01.835174 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:42:01.835189 | orchestrator | + sleep 5
2026-04-05 00:42:06.839984 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:06.880459 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:06.880568 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:42:06.880580 | orchestrator | + sleep 5
2026-04-05 00:42:11.887569 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:11.926740 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:11.926817 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:42:11.926827 | orchestrator | + sleep 5
2026-04-05 00:42:16.930803 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:16.969879 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:16.969962 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:42:16.969973 | orchestrator | + sleep 5
2026-04-05 00:42:21.975603 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:22.017000 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:22.017113 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:42:22.017129 | orchestrator | + sleep 5
2026-04-05 00:42:27.021315 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:27.066615 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:27.066690 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:42:27.066702 | orchestrator | + sleep 5
2026-04-05 00:42:32.071016 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:32.110406 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:32.110546 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:42:32.110561 | orchestrator | + sleep 5
2026-04-05 00:42:37.114778 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:37.158727 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:37.158832 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 00:42:37.158845 | orchestrator | + sleep 5
2026-04-05 00:42:42.163792 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 00:42:42.204065 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:42.204166 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-05 00:42:42.204181 | orchestrator | + local max_attempts=60
2026-04-05 00:42:42.204193 | orchestrator | + local name=kolla-ansible
2026-04-05 00:42:42.204204 | orchestrator | + local attempt_num=1
2026-04-05 00:42:42.205193 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-05 00:42:42.236833 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:42.236934 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-05 00:42:42.236949 | orchestrator | + local max_attempts=60
2026-04-05 00:42:42.236960 | orchestrator | + local name=osism-ansible
2026-04-05 00:42:42.236970 | orchestrator | + local attempt_num=1
2026-04-05 00:42:42.237591 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-05 00:42:42.266341 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 00:42:42.266491 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-05 00:42:42.266507 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-05 00:42:42.437970 | orchestrator | ARA in ceph-ansible already disabled.
2026-04-05 00:42:42.602081 | orchestrator | ARA in kolla-ansible already disabled.
2026-04-05 00:42:42.797158 | orchestrator | ARA in osism-ansible already disabled.
2026-04-05 00:42:42.960635 | orchestrator | ARA in osism-kubernetes already disabled.
2026-04-05 00:42:42.961684 | orchestrator | + osism apply gather-facts
2026-04-05 00:42:54.504279 | orchestrator | 2026-04-05 00:42:54 | INFO  | Prepare task for execution of gather-facts.
2026-04-05 00:42:54.591516 | orchestrator | 2026-04-05 00:42:54 | INFO  | Task 1c807230-7f0d-413a-af56-f165d234469d (gather-facts) was prepared for execution.
2026-04-05 00:42:54.591671 | orchestrator | 2026-04-05 00:42:54 | INFO  | It takes a moment until task 1c807230-7f0d-413a-af56-f165d234469d (gather-facts) has been started and output is visible here.
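The `wait_for_container_healthy` trace above is a plain polling loop: inspect the container's health status every five seconds until it reports `healthy` or the attempt budget runs out. A generic sketch of the same pattern, under stated assumptions: `retry_until` and `RETRY_DELAY` are hypothetical names introduced here so the loop can be tested without Docker, whereas the traced script hardcodes a 5s sleep and the `docker inspect -f '{{.State.Health.Status}}'` check.

```shell
# Hypothetical generalization of wait_for_container_healthy: run an
# arbitrary check command until it succeeds, sleeping between attempts
# and giving up after max_attempts failures (the trace uses 60 attempts).
retry_until() {
    local max_attempts="$1"
    shift
    local attempt_num=1
    until "$@"; do
        if (( attempt_num++ == max_attempts )); then
            return 1
        fi
        sleep "${RETRY_DELAY:-5}"
    done
}

# The job's usage would then read roughly:
#   retry_until 60 is_healthy ceph-ansible
# where is_healthy wraps the docker inspect health check.
```

In the log, ceph-ansible cycles through `unhealthy` (the manager restart is still settling), then `starting`, and reaches `healthy` after roughly a minute; kolla-ansible and osism-ansible are already healthy on their first check.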
2026-04-05 00:43:07.068622 | orchestrator |
2026-04-05 00:43:07.068725 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-05 00:43:07.068748 | orchestrator |
2026-04-05 00:43:07.068761 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 00:43:07.068795 | orchestrator | Sunday 05 April 2026 00:42:57 +0000 (0:00:00.263) 0:00:00.263 **********
2026-04-05 00:43:07.068807 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:43:07.068818 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:43:07.068830 | orchestrator | ok: [testbed-manager]
2026-04-05 00:43:07.068840 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:43:07.068851 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:43:07.068862 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:43:07.068873 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:43:07.068884 | orchestrator |
2026-04-05 00:43:07.068895 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-05 00:43:07.068906 | orchestrator |
2026-04-05 00:43:07.068917 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-05 00:43:07.068928 | orchestrator | Sunday 05 April 2026 00:43:06 +0000 (0:00:08.611) 0:00:08.875 **********
2026-04-05 00:43:07.068939 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:43:07.068950 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:43:07.068961 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:43:07.068971 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:43:07.068982 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:43:07.068993 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:43:07.069004 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:07.069014 | orchestrator |
2026-04-05 00:43:07.069025 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:43:07.069049 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:43:07.069062 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:43:07.069073 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:43:07.069084 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:43:07.069094 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:43:07.069105 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:43:07.069116 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 00:43:07.069127 | orchestrator |
2026-04-05 00:43:07.069137 | orchestrator |
2026-04-05 00:43:07.069148 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:43:07.069163 | orchestrator | Sunday 05 April 2026 00:43:06 +0000 (0:00:00.632) 0:00:09.508 **********
2026-04-05 00:43:07.069182 | orchestrator | ===============================================================================
2026-04-05 00:43:07.069201 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.61s
2026-04-05 00:43:07.069219 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s
2026-04-05 00:43:07.202699 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-04-05 00:43:07.213751 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-04-05 00:43:07.234177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-05 00:43:07.247310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-05 00:43:07.259228 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-05 00:43:07.269234 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-05 00:43:07.285059 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-05 00:43:07.295056 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-05 00:43:07.305265 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-05 00:43:07.316406 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-05 00:43:07.327100 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-05 00:43:07.339745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-05 00:43:07.354560 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-05 00:43:07.374575 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-05 00:43:07.392720 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-05 00:43:07.410354 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-05 00:43:07.432126 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-05 00:43:07.449177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-05 00:43:07.470380 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-05 00:43:07.488840 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-05 00:43:07.507649 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-05 00:43:07.525416 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-05 00:43:07.544738 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-05 00:43:07.563893 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-05 00:43:07.946928 | orchestrator | ok: Runtime: 0:24:55.969324
2026-04-05 00:43:08.055248 |
2026-04-05 00:43:08.055402 | TASK [Deploy services]
2026-04-05 00:43:08.590589 | orchestrator | skipping: Conditional result was False
2026-04-05 00:43:08.607647 |
2026-04-05 00:43:08.608518 | TASK [Deploy in a nutshell]
2026-04-05 00:43:09.319010 | orchestrator |
2026-04-05 00:43:09.319179 | orchestrator | # PULL IMAGES
2026-04-05 00:43:09.319201 | orchestrator |
2026-04-05 00:43:09.319216 | orchestrator | + set -e
2026-04-05 00:43:09.319233 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 00:43:09.319252 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 00:43:09.319266 | orchestrator | ++ INTERACTIVE=false
2026-04-05 00:43:09.319309 |
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:43:09.319330 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:43:09.319344 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 00:43:09.319356 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 00:43:09.319373 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 00:43:09.319385 | orchestrator | ++ export CEPH_VERSION= 2026-04-05 00:43:09.319402 | orchestrator | ++ CEPH_VERSION= 2026-04-05 00:43:09.319414 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 00:43:09.319452 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 00:43:09.319464 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-05 00:43:09.319484 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-05 00:43:09.319495 | orchestrator | ++ export OPENSTACK_VERSION= 2026-04-05 00:43:09.319506 | orchestrator | ++ OPENSTACK_VERSION= 2026-04-05 00:43:09.319519 | orchestrator | ++ export ARA=false 2026-04-05 00:43:09.319531 | orchestrator | ++ ARA=false 2026-04-05 00:43:09.319541 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 00:43:09.319552 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 00:43:09.319564 | orchestrator | ++ export TEMPEST=true 2026-04-05 00:43:09.319574 | orchestrator | ++ TEMPEST=true 2026-04-05 00:43:09.319585 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 00:43:09.319596 | orchestrator | ++ IS_ZUUL=true 2026-04-05 00:43:09.319607 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.32 2026-04-05 00:43:09.319618 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.32 2026-04-05 00:43:09.319628 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 00:43:09.319639 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 00:43:09.319650 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 00:43:09.319661 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 00:43:09.319672 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 00:43:09.319683 | orchestrator | ++ 
IMAGE_NODE_USER=ubuntu 2026-04-05 00:43:09.319694 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 00:43:09.319704 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 00:43:09.319715 | orchestrator | + echo 2026-04-05 00:43:09.319726 | orchestrator | + echo '# PULL IMAGES' 2026-04-05 00:43:09.319737 | orchestrator | + echo 2026-04-05 00:43:09.319766 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-05 00:43:09.371287 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 00:43:09.371357 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-05 00:43:10.595308 | orchestrator | 2026-04-05 00:43:10 | INFO  | Trying to run play pull-images in environment custom 2026-04-05 00:43:20.641804 | orchestrator | 2026-04-05 00:43:20 | INFO  | Prepare task for execution of pull-images. 2026-04-05 00:43:20.728867 | orchestrator | 2026-04-05 00:43:20 | INFO  | Task f152fac8-199c-4e11-8d9d-06f60a9e6923 (pull-images) was prepared for execution. 2026-04-05 00:43:20.728990 | orchestrator | 2026-04-05 00:43:20 | INFO  | Task f152fac8-199c-4e11-8d9d-06f60a9e6923 is running in background. No more output. Check ARA for logs. 2026-04-05 00:43:22.394975 | orchestrator | 2026-04-05 00:43:22 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-05 00:43:32.587374 | orchestrator | 2026-04-05 00:43:32 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-05 00:43:32.662915 | orchestrator | 2026-04-05 00:43:32 | INFO  | Task 9613d093-736b-4290-9666-15f8894496a8 (wipe-partitions) was prepared for execution. 2026-04-05 00:43:32.662992 | orchestrator | 2026-04-05 00:43:32 | INFO  | It takes a moment until task 9613d093-736b-4290-9666-15f8894496a8 (wipe-partitions) has been started and output is visible here. 
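The queued wipe-partitions play (its output follows) clears the Ceph OSD disks in three steps: drop filesystem and partition signatures with wipefs, zero the first 32M, then reload udev so the kernel re-reads the now-empty devices. A minimal sketch of that sequence, demonstrated against a scratch file rather than a real disk; on the testbed nodes the targets are real block devices (/dev/sdb etc.), so the root-only wipefs/udevadm steps are left commented:

```shell
# Sketch of the wipe sequence from the play below -- run here against a
# throwaway file, NOT a disk. Against a real device these steps destroy data.
disk=$(mktemp)                        # stand-in for e.g. /dev/sdb
printf 'fake-signature' > "$disk"
# wipefs -a "$disk"                   # step 1: drop fs/partition signatures (real disk, root)
dd if=/dev/zero of="$disk" bs=1M count=32 conv=notrunc status=none  # step 2: zero first 32M
# udevadm control --reload            # step 3a: reload udev rules (root)
# udevadm trigger                     # step 3b: request device events from the kernel (root)
tr -d '\0' < "$disk" | wc -c          # prints 0: only zeros remain
rm -f "$disk"
```

The zeroing matters because wipefs alone only erases known signature magic; overwriting the first 32M also removes LVM/Ceph metadata that later OSD provisioning would otherwise trip over.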
2026-04-05 00:43:44.803596 | orchestrator |
2026-04-05 00:43:44.803715 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-05 00:43:44.803733 | orchestrator |
2026-04-05 00:43:44.803745 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-05 00:43:44.803762 | orchestrator | Sunday 05 April 2026 00:43:35 +0000 (0:00:00.161) 0:00:00.161 **********
2026-04-05 00:43:44.803774 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:43:44.803814 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:43:44.803827 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:43:44.803838 | orchestrator |
2026-04-05 00:43:44.803850 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-05 00:43:44.803862 | orchestrator | Sunday 05 April 2026 00:43:36 +0000 (0:00:00.969) 0:00:01.130 **********
2026-04-05 00:43:44.803873 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:43:44.803888 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:43:44.803899 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:44.803910 | orchestrator |
2026-04-05 00:43:44.803921 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-05 00:43:44.803933 | orchestrator | Sunday 05 April 2026 00:43:37 +0000 (0:00:00.544) 0:00:01.410 **********
2026-04-05 00:43:44.803944 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:43:44.803956 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:43:44.803967 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:43:44.803978 | orchestrator |
2026-04-05 00:43:44.803989 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-05 00:43:44.804000 | orchestrator | Sunday 05 April 2026 00:43:37 +0000 (0:00:00.314) 0:00:01.955 **********
2026-04-05 00:43:44.804012 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:43:44.804023 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:43:44.804033 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:44.804045 | orchestrator |
2026-04-05 00:43:44.804056 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-05 00:43:44.804067 | orchestrator | Sunday 05 April 2026 00:43:38 +0000 (0:00:01.440) 0:00:02.270 **********
2026-04-05 00:43:44.804078 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-05 00:43:44.804095 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-05 00:43:44.804108 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-05 00:43:44.804121 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-05 00:43:44.804135 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-05 00:43:44.804147 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-05 00:43:44.804160 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-05 00:43:44.804173 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-05 00:43:44.804186 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-05 00:43:44.804199 | orchestrator |
2026-04-05 00:43:44.804212 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-05 00:43:44.804223 | orchestrator | Sunday 05 April 2026 00:43:39 +0000 (0:00:01.397) 0:00:03.711 **********
2026-04-05 00:43:44.804234 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-05 00:43:44.804246 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-05 00:43:44.804257 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-05 00:43:44.804268 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-05 00:43:44.804279 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-05 00:43:44.804290 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-05 00:43:44.804301 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-05 00:43:44.804312 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-05 00:43:44.804323 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-05 00:43:44.804334 | orchestrator |
2026-04-05 00:43:44.804345 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-05 00:43:44.804356 | orchestrator | Sunday 05 April 2026 00:43:40 +0000 (0:00:02.166) 0:00:05.108 **********
2026-04-05 00:43:44.804374 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-05 00:43:44.804386 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-05 00:43:44.804397 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-05 00:43:44.804408 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-05 00:43:44.804419 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-05 00:43:44.804440 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-05 00:43:44.804483 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-05 00:43:44.804494 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-05 00:43:44.804505 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-05 00:43:44.804516 | orchestrator |
2026-04-05 00:43:44.804527 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-05 00:43:44.804538 | orchestrator | Sunday 05 April 2026 00:43:43 +0000 (0:00:00.636) 0:00:07.275 **********
2026-04-05 00:43:44.804549 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:43:44.804560 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:43:44.804571 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:43:44.804582 | orchestrator |
2026-04-05 00:43:44.804593 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-05 00:43:44.804604 | orchestrator | Sunday 05 April 2026 00:43:43 +0000 (0:00:00.864) 0:00:07.912 **********
2026-04-05 00:43:44.804615 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:43:44.804626 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:43:44.804636 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:43:44.804647 | orchestrator |
2026-04-05 00:43:44.804659 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:43:44.804672 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:43:44.804684 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:43:44.804713 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:43:44.804725 | orchestrator |
2026-04-05 00:43:44.804736 | orchestrator |
2026-04-05 00:43:44.804747 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:43:44.804758 | orchestrator | Sunday 05 April 2026 00:43:44 +0000 (0:00:00.864) 0:00:08.776 **********
2026-04-05 00:43:44.804769 | orchestrator | ===============================================================================
2026-04-05 00:43:44.804779 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.17s
2026-04-05 00:43:44.804790 | orchestrator | Check device availability ----------------------------------------------- 1.44s
2026-04-05 00:43:44.804801 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s
2026-04-05 00:43:44.804812 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.97s
2026-04-05 00:43:44.804823 | orchestrator | Request device events from the kernel ----------------------------------- 0.86s
2026-04-05 00:43:44.804833 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s
2026-04-05 00:43:44.804844 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s
2026-04-05 00:43:44.804855 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.32s
2026-04-05 00:43:44.804866 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s
2026-04-05 00:43:56.344315 | orchestrator | 2026-04-05 00:43:56 | INFO  | Prepare task for execution of facts.
2026-04-05 00:43:56.425643 | orchestrator | 2026-04-05 00:43:56 | INFO  | Task ebd60052-1842-43a1-9175-9c5cbe5f8bcf (facts) was prepared for execution.
2026-04-05 00:43:56.425735 | orchestrator | 2026-04-05 00:43:56 | INFO  | It takes a moment until task ebd60052-1842-43a1-9175-9c5cbe5f8bcf (facts) has been started and output is visible here.
2026-04-05 00:44:09.032662 | orchestrator |
2026-04-05 00:44:09.032787 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-05 00:44:09.032809 | orchestrator |
2026-04-05 00:44:09.032826 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-05 00:44:09.032871 | orchestrator | Sunday 05 April 2026 00:43:59 +0000 (0:00:00.388) 0:00:00.389 **********
2026-04-05 00:44:09.032884 | orchestrator | ok: [testbed-manager]
2026-04-05 00:44:09.032899 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:44:09.032912 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:44:09.032924 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:44:09.032937 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:09.032950 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:44:09.032963 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:44:09.032976 | orchestrator |
2026-04-05 00:44:09.032990 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-05 00:44:09.033003 | orchestrator | Sunday 05 April 2026 00:44:01 +0000 (0:00:01.299) 0:00:01.688 **********
2026-04-05 00:44:09.033015 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:44:09.033028 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:44:09.033040 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:44:09.033053 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:44:09.033067 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:09.033081 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:09.033094 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:09.033108 | orchestrator |
2026-04-05 00:44:09.033122 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-05 00:44:09.033135 | orchestrator |
2026-04-05 00:44:09.033148 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 00:44:09.033162 | orchestrator | Sunday 05 April 2026 00:44:02 +0000 (0:00:01.230) 0:00:02.919 **********
2026-04-05 00:44:09.033193 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:44:09.033208 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:44:09.033222 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:44:09.033235 | orchestrator | ok: [testbed-manager]
2026-04-05 00:44:09.033249 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:44:09.033263 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:09.033278 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:44:09.033291 | orchestrator |
2026-04-05 00:44:09.033305 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-05 00:44:09.033319 | orchestrator |
2026-04-05 00:44:09.033332 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-05 00:44:09.033346 | orchestrator | Sunday 05 April 2026 00:44:07 +0000 (0:00:05.493) 0:00:08.412 **********
2026-04-05 00:44:09.033361 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:44:09.033375 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:44:09.033389 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:44:09.033403 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:44:09.033418 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:09.033431 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:09.033445 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:09.033483 | orchestrator |
2026-04-05 00:44:09.033499 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:44:09.033513 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:09.033529 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:09.033543 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:09.033556 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:09.033569 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:09.033582 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:09.033612 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:44:09.033626 | orchestrator |
2026-04-05 00:44:09.033640 | orchestrator |
2026-04-05 00:44:09.033653 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:44:09.033665 | orchestrator | Sunday 05 April 2026 00:44:08 +0000 (0:00:00.606) 0:00:09.019 **********
2026-04-05 00:44:09.033677 | orchestrator | ===============================================================================
2026-04-05 00:44:09.033690 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.49s
2026-04-05 00:44:09.033703 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s
2026-04-05 00:44:09.033716 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s
2026-04-05 00:44:09.033729 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s
2026-04-05 00:44:10.613674 | orchestrator | 2026-04-05 00:44:10 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-05 00:44:10.687100 | orchestrator | 2026-04-05 00:44:10 | INFO  | Task 5f2c50bf-43e6-4540-870a-0cc0e1c21c11 (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-05 00:44:10.687187 | orchestrator | 2026-04-05 00:44:10 | INFO  | It takes a moment until task 5f2c50bf-43e6-4540-870a-0cc0e1c21c11 (ceph-configure-lvm-volumes) has been started and output is visible here.
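The ceph-configure-lvm-volumes play that follows builds a device list for each OSD node and, in its "Add known links" tasks, collects the stable /dev/disk/by-id aliases (scsi-0QEMU_QEMU_HARDDISK_..., scsi-SQEMU_QEMU_HARDDISK_...) pointing at each kernel device. How such an alias resolves can be sketched with readlink; the temp directory and the fake id string below stand in for /dev/disk/by-id and real disk serials:

```shell
# Sketch: resolving stable by-id aliases back to kernel device names, as the
# "Add known links" tasks do. A temp directory of fake symlinks stands in for
# /dev/disk/by-id; the id string is illustrative, not from the log.
byid=$(mktemp -d)
touch "$byid/sdb"                                        # stand-in kernel device node
ln -s "$byid/sdb" "$byid/scsi-0QEMU_QEMU_HARDDISK_fake"  # stable alias -> device
for link in "$byid"/scsi-*; do
    printf '%s -> %s\n' "${link##*/}" "$(basename "$(readlink -f "$link")")"
done
# prints: scsi-0QEMU_QEMU_HARDDISK_fake -> sdb
rm -rf "$byid"
```

Tracking both the kernel name and its by-id aliases matters because /dev/sdb can point at a different physical disk after a reboot, while the by-id alias stays bound to the same device.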
2026-04-05 00:44:23.262336 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 00:44:23.262441 | orchestrator | 2.16.14
2026-04-05 00:44:23.262457 | orchestrator |
2026-04-05 00:44:23.262499 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-05 00:44:23.262513 | orchestrator |
2026-04-05 00:44:23.262524 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 00:44:23.262536 | orchestrator | Sunday 05 April 2026 00:44:15 +0000 (0:00:00.325) 0:00:00.325 **********
2026-04-05 00:44:23.262547 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 00:44:23.262559 | orchestrator |
2026-04-05 00:44:23.262570 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:44:23.262581 | orchestrator | Sunday 05 April 2026 00:44:15 +0000 (0:00:00.253) 0:00:00.579 **********
2026-04-05 00:44:23.262592 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:23.262604 | orchestrator |
2026-04-05 00:44:23.262615 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.262626 | orchestrator | Sunday 05 April 2026 00:44:15 +0000 (0:00:00.226) 0:00:00.805 **********
2026-04-05 00:44:23.262637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-05 00:44:23.262648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-05 00:44:23.262659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-05 00:44:23.262680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-05 00:44:23.262691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-05 00:44:23.262702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-05 00:44:23.262713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-05 00:44:23.262724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-05 00:44:23.262735 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-05 00:44:23.262745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-05 00:44:23.262756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-05 00:44:23.262786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-05 00:44:23.262798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-05 00:44:23.262809 | orchestrator |
2026-04-05 00:44:23.262820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.262830 | orchestrator | Sunday 05 April 2026 00:44:16 +0000 (0:00:00.375) 0:00:01.180 **********
2026-04-05 00:44:23.262841 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.262852 | orchestrator |
2026-04-05 00:44:23.262862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.262873 | orchestrator | Sunday 05 April 2026 00:44:16 +0000 (0:00:00.512) 0:00:01.692 **********
2026-04-05 00:44:23.262884 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.262895 | orchestrator |
2026-04-05 00:44:23.262905 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.262916 | orchestrator | Sunday 05 April 2026 00:44:16 +0000 (0:00:00.232) 0:00:01.925 **********
2026-04-05 00:44:23.262931 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.262942 | orchestrator |
2026-04-05 00:44:23.262953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.262964 | orchestrator | Sunday 05 April 2026 00:44:17 +0000 (0:00:00.202) 0:00:02.127 **********
2026-04-05 00:44:23.262975 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.262986 | orchestrator |
2026-04-05 00:44:23.262997 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263007 | orchestrator | Sunday 05 April 2026 00:44:17 +0000 (0:00:00.203) 0:00:02.330 **********
2026-04-05 00:44:23.263018 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263029 | orchestrator |
2026-04-05 00:44:23.263039 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263050 | orchestrator | Sunday 05 April 2026 00:44:17 +0000 (0:00:00.211) 0:00:02.542 **********
2026-04-05 00:44:23.263060 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263071 | orchestrator |
2026-04-05 00:44:23.263081 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263092 | orchestrator | Sunday 05 April 2026 00:44:17 +0000 (0:00:00.224) 0:00:02.767 **********
2026-04-05 00:44:23.263103 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263113 | orchestrator |
2026-04-05 00:44:23.263124 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263134 | orchestrator | Sunday 05 April 2026 00:44:18 +0000 (0:00:00.203) 0:00:02.970 **********
2026-04-05 00:44:23.263145 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263156 | orchestrator |
2026-04-05 00:44:23.263166 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263177 | orchestrator | Sunday 05 April 2026 00:44:18 +0000 (0:00:00.228) 0:00:03.198 **********
2026-04-05 00:44:23.263187 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886)
2026-04-05 00:44:23.263199 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886)
2026-04-05 00:44:23.263209 | orchestrator |
2026-04-05 00:44:23.263220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263247 | orchestrator | Sunday 05 April 2026 00:44:18 +0000 (0:00:00.475) 0:00:03.673 **********
2026-04-05 00:44:23.263258 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087)
2026-04-05 00:44:23.263269 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087)
2026-04-05 00:44:23.263279 | orchestrator |
2026-04-05 00:44:23.263290 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263300 | orchestrator | Sunday 05 April 2026 00:44:19 +0000 (0:00:00.431) 0:00:04.105 **********
2026-04-05 00:44:23.263318 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966)
2026-04-05 00:44:23.263334 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966)
2026-04-05 00:44:23.263345 | orchestrator |
2026-04-05 00:44:23.263356 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263367 | orchestrator | Sunday 05 April 2026 00:44:19 +0000 (0:00:00.708) 0:00:04.814 **********
2026-04-05 00:44:23.263377 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30)
2026-04-05 00:44:23.263388 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30)
2026-04-05 00:44:23.263398 | orchestrator |
2026-04-05 00:44:23.263409 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:23.263419 | orchestrator | Sunday 05 April 2026 00:44:20 +0000 (0:00:00.664) 0:00:05.478 **********
2026-04-05 00:44:23.263430 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 00:44:23.263440 | orchestrator |
2026-04-05 00:44:23.263451 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:23.263461 | orchestrator | Sunday 05 April 2026 00:44:21 +0000 (0:00:00.803) 0:00:06.282 **********
2026-04-05 00:44:23.263491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-05 00:44:23.263502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-05 00:44:23.263512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-05 00:44:23.263523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-05 00:44:23.263534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-05 00:44:23.263544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-05 00:44:23.263555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-05 00:44:23.263565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-05 00:44:23.263576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-05 00:44:23.263586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-05 00:44:23.263597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-05 00:44:23.263608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-05 00:44:23.263618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-05 00:44:23.263629 | orchestrator |
2026-04-05 00:44:23.263639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:23.263650 | orchestrator | Sunday 05 April 2026 00:44:21 +0000 (0:00:00.421) 0:00:06.704 **********
2026-04-05 00:44:23.263661 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263671 | orchestrator |
2026-04-05 00:44:23.263682 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:23.263692 | orchestrator | Sunday 05 April 2026 00:44:21 +0000 (0:00:00.217) 0:00:06.921 **********
2026-04-05 00:44:23.263703 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263713 | orchestrator |
2026-04-05 00:44:23.263724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:23.263735 | orchestrator | Sunday 05 April 2026 00:44:22 +0000 (0:00:00.220) 0:00:07.142 **********
2026-04-05 00:44:23.263745 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263756 | orchestrator |
2026-04-05 00:44:23.263766 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:23.263794 | orchestrator | Sunday 05 April 2026 00:44:22 +0000 (0:00:00.206) 0:00:07.349 **********
2026-04-05 00:44:23.263804 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263815 | orchestrator |
2026-04-05 00:44:23.263826 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:23.263836 | orchestrator | Sunday 05 April 2026 00:44:22 +0000 (0:00:00.214) 0:00:07.563 **********
2026-04-05 00:44:23.263847 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263857 | orchestrator |
2026-04-05 00:44:23.263868 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:23.263878 | orchestrator | Sunday 05 April 2026 00:44:22 +0000 (0:00:00.211) 0:00:07.774 **********
2026-04-05 00:44:23.263889 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263900 | orchestrator |
2026-04-05 00:44:23.263910 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:23.263921 | orchestrator | Sunday 05 April 2026 00:44:23 +0000 (0:00:00.195) 0:00:07.969 **********
2026-04-05 00:44:23.263931 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:23.263942 | orchestrator |
2026-04-05 00:44:23.263958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:31.408592 | orchestrator | Sunday 05 April 2026 00:44:23 +0000 (0:00:00.216) 0:00:08.186 **********
2026-04-05 00:44:31.408702 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:31.408719 | orchestrator |
2026-04-05 00:44:31.408731 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:31.408742 | orchestrator | Sunday 05 April 2026 00:44:23 +0000 (0:00:00.193) 0:00:08.379 **********
2026-04-05 00:44:31.408753 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-05 00:44:31.408765 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-05 00:44:31.408776 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-05 00:44:31.408787 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-05 00:44:31.408798 | orchestrator |
2026-04-05
00:44:31.408809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:31.408820 | orchestrator | Sunday 05 April 2026 00:44:24 +0000 (0:00:01.133) 0:00:09.513 ********** 2026-04-05 00:44:31.408831 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.408842 | orchestrator | 2026-04-05 00:44:31.408853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:31.408864 | orchestrator | Sunday 05 April 2026 00:44:24 +0000 (0:00:00.206) 0:00:09.720 ********** 2026-04-05 00:44:31.408875 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.408885 | orchestrator | 2026-04-05 00:44:31.408914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:31.408926 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.235) 0:00:09.955 ********** 2026-04-05 00:44:31.408937 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.408948 | orchestrator | 2026-04-05 00:44:31.408959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:31.408970 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.216) 0:00:10.172 ********** 2026-04-05 00:44:31.408981 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.408991 | orchestrator | 2026-04-05 00:44:31.409002 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-05 00:44:31.409013 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.199) 0:00:10.371 ********** 2026-04-05 00:44:31.409023 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-05 00:44:31.409034 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-05 00:44:31.409045 | orchestrator | 2026-04-05 00:44:31.409056 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-05 00:44:31.409067 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.191) 0:00:10.563 ********** 2026-04-05 00:44:31.409078 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409116 | orchestrator | 2026-04-05 00:44:31.409129 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-05 00:44:31.409142 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.154) 0:00:10.717 ********** 2026-04-05 00:44:31.409154 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409167 | orchestrator | 2026-04-05 00:44:31.409179 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-05 00:44:31.409192 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.156) 0:00:10.874 ********** 2026-04-05 00:44:31.409204 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409217 | orchestrator | 2026-04-05 00:44:31.409230 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-05 00:44:31.409242 | orchestrator | Sunday 05 April 2026 00:44:26 +0000 (0:00:00.145) 0:00:11.019 ********** 2026-04-05 00:44:31.409255 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:44:31.409268 | orchestrator | 2026-04-05 00:44:31.409280 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-05 00:44:31.409291 | orchestrator | Sunday 05 April 2026 00:44:26 +0000 (0:00:00.138) 0:00:11.158 ********** 2026-04-05 00:44:31.409303 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '157b1f80-825d-547a-87b1-b4c204357e87'}}) 2026-04-05 00:44:31.409315 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b6d430e-d9c3-5542-869b-9d02c8b92670'}}) 2026-04-05 00:44:31.409326 | orchestrator | 2026-04-05 00:44:31.409336 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-05 00:44:31.409347 | orchestrator | Sunday 05 April 2026 00:44:26 +0000 (0:00:00.159) 0:00:11.317 ********** 2026-04-05 00:44:31.409359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '157b1f80-825d-547a-87b1-b4c204357e87'}})  2026-04-05 00:44:31.409378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b6d430e-d9c3-5542-869b-9d02c8b92670'}})  2026-04-05 00:44:31.409389 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409400 | orchestrator | 2026-04-05 00:44:31.409411 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-05 00:44:31.409422 | orchestrator | Sunday 05 April 2026 00:44:26 +0000 (0:00:00.152) 0:00:11.469 ********** 2026-04-05 00:44:31.409432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '157b1f80-825d-547a-87b1-b4c204357e87'}})  2026-04-05 00:44:31.409449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b6d430e-d9c3-5542-869b-9d02c8b92670'}})  2026-04-05 00:44:31.409460 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409495 | orchestrator | 2026-04-05 00:44:31.409506 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-05 00:44:31.409517 | orchestrator | Sunday 05 April 2026 00:44:26 +0000 (0:00:00.165) 0:00:11.635 ********** 2026-04-05 00:44:31.409528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '157b1f80-825d-547a-87b1-b4c204357e87'}})  2026-04-05 00:44:31.409557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b6d430e-d9c3-5542-869b-9d02c8b92670'}})  2026-04-05 00:44:31.409569 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409580 | 
orchestrator | 2026-04-05 00:44:31.409591 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-05 00:44:31.409602 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.379) 0:00:12.015 ********** 2026-04-05 00:44:31.409613 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:44:31.409624 | orchestrator | 2026-04-05 00:44:31.409635 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-05 00:44:31.409646 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.156) 0:00:12.172 ********** 2026-04-05 00:44:31.409657 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:44:31.409668 | orchestrator | 2026-04-05 00:44:31.409679 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-05 00:44:31.409699 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.143) 0:00:12.316 ********** 2026-04-05 00:44:31.409709 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409720 | orchestrator | 2026-04-05 00:44:31.409732 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-05 00:44:31.409743 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.133) 0:00:12.450 ********** 2026-04-05 00:44:31.409754 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409765 | orchestrator | 2026-04-05 00:44:31.409776 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-05 00:44:31.409787 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.161) 0:00:12.611 ********** 2026-04-05 00:44:31.409798 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409808 | orchestrator | 2026-04-05 00:44:31.409819 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-05 00:44:31.409830 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 
(0:00:00.136) 0:00:12.748 ********** 2026-04-05 00:44:31.409841 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 00:44:31.409852 | orchestrator |  "ceph_osd_devices": { 2026-04-05 00:44:31.409863 | orchestrator |  "sdb": { 2026-04-05 00:44:31.409874 | orchestrator |  "osd_lvm_uuid": "157b1f80-825d-547a-87b1-b4c204357e87" 2026-04-05 00:44:31.409885 | orchestrator |  }, 2026-04-05 00:44:31.409896 | orchestrator |  "sdc": { 2026-04-05 00:44:31.409907 | orchestrator |  "osd_lvm_uuid": "9b6d430e-d9c3-5542-869b-9d02c8b92670" 2026-04-05 00:44:31.409918 | orchestrator |  } 2026-04-05 00:44:31.409928 | orchestrator |  } 2026-04-05 00:44:31.409939 | orchestrator | } 2026-04-05 00:44:31.409950 | orchestrator | 2026-04-05 00:44:31.409961 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-05 00:44:31.409972 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.152) 0:00:12.900 ********** 2026-04-05 00:44:31.409983 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.409994 | orchestrator | 2026-04-05 00:44:31.410010 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-05 00:44:31.410089 | orchestrator | Sunday 05 April 2026 00:44:28 +0000 (0:00:00.143) 0:00:13.043 ********** 2026-04-05 00:44:31.410109 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.410128 | orchestrator | 2026-04-05 00:44:31.410147 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-05 00:44:31.410166 | orchestrator | Sunday 05 April 2026 00:44:28 +0000 (0:00:00.136) 0:00:13.180 ********** 2026-04-05 00:44:31.410185 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:31.410204 | orchestrator | 2026-04-05 00:44:31.410224 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-05 00:44:31.410244 | orchestrator | Sunday 05 April 2026 00:44:28 +0000 
(0:00:00.130) 0:00:13.311 ********** 2026-04-05 00:44:31.410262 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 00:44:31.410281 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-05 00:44:31.410301 | orchestrator |  "ceph_osd_devices": { 2026-04-05 00:44:31.410320 | orchestrator |  "sdb": { 2026-04-05 00:44:31.410348 | orchestrator |  "osd_lvm_uuid": "157b1f80-825d-547a-87b1-b4c204357e87" 2026-04-05 00:44:31.410360 | orchestrator |  }, 2026-04-05 00:44:31.410371 | orchestrator |  "sdc": { 2026-04-05 00:44:31.410382 | orchestrator |  "osd_lvm_uuid": "9b6d430e-d9c3-5542-869b-9d02c8b92670" 2026-04-05 00:44:31.410393 | orchestrator |  } 2026-04-05 00:44:31.410403 | orchestrator |  }, 2026-04-05 00:44:31.410414 | orchestrator |  "lvm_volumes": [ 2026-04-05 00:44:31.410425 | orchestrator |  { 2026-04-05 00:44:31.410436 | orchestrator |  "data": "osd-block-157b1f80-825d-547a-87b1-b4c204357e87", 2026-04-05 00:44:31.410446 | orchestrator |  "data_vg": "ceph-157b1f80-825d-547a-87b1-b4c204357e87" 2026-04-05 00:44:31.410457 | orchestrator |  }, 2026-04-05 00:44:31.410510 | orchestrator |  { 2026-04-05 00:44:31.410530 | orchestrator |  "data": "osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670", 2026-04-05 00:44:31.410549 | orchestrator |  "data_vg": "ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670" 2026-04-05 00:44:31.410568 | orchestrator |  } 2026-04-05 00:44:31.410585 | orchestrator |  ] 2026-04-05 00:44:31.410602 | orchestrator |  } 2026-04-05 00:44:31.410613 | orchestrator | } 2026-04-05 00:44:31.410624 | orchestrator | 2026-04-05 00:44:31.410635 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-05 00:44:31.410646 | orchestrator | Sunday 05 April 2026 00:44:28 +0000 (0:00:00.224) 0:00:13.535 ********** 2026-04-05 00:44:31.410657 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 00:44:31.410668 | orchestrator | 2026-04-05 00:44:31.410679 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-05 00:44:31.410690 | orchestrator | 2026-04-05 00:44:31.410701 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 00:44:31.410711 | orchestrator | Sunday 05 April 2026 00:44:30 +0000 (0:00:02.302) 0:00:15.838 ********** 2026-04-05 00:44:31.410722 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-05 00:44:31.410733 | orchestrator | 2026-04-05 00:44:31.410744 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 00:44:31.410755 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:00.266) 0:00:16.104 ********** 2026-04-05 00:44:31.410766 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:44:31.410777 | orchestrator | 2026-04-05 00:44:31.410798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.579908 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:00.233) 0:00:16.337 ********** 2026-04-05 00:44:39.580031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-05 00:44:39.580043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-05 00:44:39.580051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-05 00:44:39.580058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-05 00:44:39.580065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-05 00:44:39.580072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-05 00:44:39.580079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-05 00:44:39.580086 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-05 00:44:39.580096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-05 00:44:39.580103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-05 00:44:39.580110 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-05 00:44:39.580117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-05 00:44:39.580123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-05 00:44:39.580129 | orchestrator | 2026-04-05 00:44:39.580137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580144 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:00.391) 0:00:16.729 ********** 2026-04-05 00:44:39.580150 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:39.580158 | orchestrator | 2026-04-05 00:44:39.580165 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580171 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.254) 0:00:16.983 ********** 2026-04-05 00:44:39.580177 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:39.580214 | orchestrator | 2026-04-05 00:44:39.580221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580249 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.219) 0:00:17.202 ********** 2026-04-05 00:44:39.580256 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:39.580262 | orchestrator | 2026-04-05 00:44:39.580269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580276 | 
orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.210) 0:00:17.413 ********** 2026-04-05 00:44:39.580283 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:39.580289 | orchestrator | 2026-04-05 00:44:39.580295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580301 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.196) 0:00:17.609 ********** 2026-04-05 00:44:39.580308 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:39.580314 | orchestrator | 2026-04-05 00:44:39.580320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580326 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.225) 0:00:17.834 ********** 2026-04-05 00:44:39.580332 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:39.580338 | orchestrator | 2026-04-05 00:44:39.580344 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580350 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.603) 0:00:18.438 ********** 2026-04-05 00:44:39.580357 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:39.580362 | orchestrator | 2026-04-05 00:44:39.580369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580376 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.217) 0:00:18.655 ********** 2026-04-05 00:44:39.580381 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:39.580387 | orchestrator | 2026-04-05 00:44:39.580394 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580402 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.204) 0:00:18.859 ********** 2026-04-05 00:44:39.580408 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869) 2026-04-05 00:44:39.580417 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869) 2026-04-05 00:44:39.580424 | orchestrator | 2026-04-05 00:44:39.580431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580438 | orchestrator | Sunday 05 April 2026 00:44:34 +0000 (0:00:00.409) 0:00:19.269 ********** 2026-04-05 00:44:39.580445 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2) 2026-04-05 00:44:39.580452 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2) 2026-04-05 00:44:39.580458 | orchestrator | 2026-04-05 00:44:39.580466 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580495 | orchestrator | Sunday 05 April 2026 00:44:34 +0000 (0:00:00.509) 0:00:19.778 ********** 2026-04-05 00:44:39.580502 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767) 2026-04-05 00:44:39.580509 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767) 2026-04-05 00:44:39.580517 | orchestrator | 2026-04-05 00:44:39.580524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:39.580556 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.520) 0:00:20.300 ********** 2026-04-05 00:44:39.580565 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92) 2026-04-05 00:44:39.580572 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92) 2026-04-05 00:44:39.580579 | orchestrator | 2026-04-05 00:44:39.580587 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-05 00:44:39.580606 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.445) 0:00:20.745 ********** 2026-04-05 00:44:39.580614 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:44:39.580621 | orchestrator | 2026-04-05 00:44:39.580628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:39.580636 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.332) 0:00:21.077 ********** 2026-04-05 00:44:39.580643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-05 00:44:39.580651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-05 00:44:39.580659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-05 00:44:39.580666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-05 00:44:39.580672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-05 00:44:39.580680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-05 00:44:39.580687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-05 00:44:39.580693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-05 00:44:39.580710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-05 00:44:39.580718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-05 00:44:39.580725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-05 00:44:39.580733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-05 00:44:39.580740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-05 00:44:39.580748 | orchestrator |
2026-04-05 00:44:39.580755 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580763 | orchestrator | Sunday 05 April 2026  00:44:36 +0000 (0:00:00.413)       0:00:21.491 **********
2026-04-05 00:44:39.580771 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:39.580778 | orchestrator |
2026-04-05 00:44:39.580785 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580792 | orchestrator | Sunday 05 April 2026  00:44:36 +0000 (0:00:00.202)       0:00:21.694 **********
2026-04-05 00:44:39.580799 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:39.580807 | orchestrator |
2026-04-05 00:44:39.580814 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580821 | orchestrator | Sunday 05 April 2026  00:44:37 +0000 (0:00:00.669)       0:00:22.363 **********
2026-04-05 00:44:39.580828 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:39.580834 | orchestrator |
2026-04-05 00:44:39.580841 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580847 | orchestrator | Sunday 05 April 2026  00:44:37 +0000 (0:00:00.198)       0:00:22.562 **********
2026-04-05 00:44:39.580853 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:39.580860 | orchestrator |
2026-04-05 00:44:39.580867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580873 | orchestrator | Sunday 05 April 2026  00:44:37 +0000 (0:00:00.197)       0:00:22.759 **********
2026-04-05 00:44:39.580879 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:39.580886 | orchestrator |
2026-04-05 00:44:39.580893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580900 | orchestrator | Sunday 05 April 2026  00:44:38 +0000 (0:00:00.198)       0:00:22.957 **********
2026-04-05 00:44:39.580907 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:39.580914 | orchestrator |
2026-04-05 00:44:39.580920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580934 | orchestrator | Sunday 05 April 2026  00:44:38 +0000 (0:00:00.218)       0:00:23.175 **********
2026-04-05 00:44:39.580941 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:39.580947 | orchestrator |
2026-04-05 00:44:39.580954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580961 | orchestrator | Sunday 05 April 2026  00:44:38 +0000 (0:00:00.235)       0:00:23.410 **********
2026-04-05 00:44:39.580968 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:39.580975 | orchestrator |
2026-04-05 00:44:39.580982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.580989 | orchestrator | Sunday 05 April 2026  00:44:38 +0000 (0:00:00.214)       0:00:23.625 **********
2026-04-05 00:44:39.580996 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-05 00:44:39.581004 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-05 00:44:39.581012 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-05 00:44:39.581020 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-05 00:44:39.581027 | orchestrator |
2026-04-05 00:44:39.581034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:39.581041 | orchestrator | Sunday 05 April 2026  00:44:39 +0000 (0:00:00.752)       0:00:24.377 **********
2026-04-05 00:44:39.581048 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.495436 | orchestrator |
2026-04-05 00:44:46.495626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:46.495645 | orchestrator | Sunday 05 April 2026  00:44:39 +0000 (0:00:00.215)       0:00:24.592 **********
2026-04-05 00:44:46.495657 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.495669 | orchestrator |
2026-04-05 00:44:46.495681 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:46.495692 | orchestrator | Sunday 05 April 2026  00:44:39 +0000 (0:00:00.201)       0:00:24.794 **********
2026-04-05 00:44:46.495703 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.495713 | orchestrator |
2026-04-05 00:44:46.495724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:46.495735 | orchestrator | Sunday 05 April 2026  00:44:40 +0000 (0:00:00.219)       0:00:25.013 **********
2026-04-05 00:44:46.495746 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.495785 | orchestrator |
2026-04-05 00:44:46.495799 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-05 00:44:46.495812 | orchestrator | Sunday 05 April 2026  00:44:40 +0000 (0:00:00.208)       0:00:25.221 **********
2026-04-05 00:44:46.495826 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-05 00:44:46.495845 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-05 00:44:46.495863 | orchestrator |
2026-04-05 00:44:46.495883 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-05 00:44:46.495917 | orchestrator | Sunday 05 April 2026  00:44:40 +0000 (0:00:00.399)       0:00:25.621 **********
2026-04-05 00:44:46.495951 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.495968 | orchestrator |
2026-04-05 00:44:46.495987 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-05 00:44:46.496006 | orchestrator | Sunday 05 April 2026  00:44:40 +0000 (0:00:00.138)       0:00:25.760 **********
2026-04-05 00:44:46.496025 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.496045 | orchestrator |
2026-04-05 00:44:46.496084 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-05 00:44:46.496116 | orchestrator | Sunday 05 April 2026  00:44:40 +0000 (0:00:00.149)       0:00:25.910 **********
2026-04-05 00:44:46.496131 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.496144 | orchestrator |
2026-04-05 00:44:46.496157 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-05 00:44:46.496168 | orchestrator | Sunday 05 April 2026  00:44:41 +0000 (0:00:00.142)       0:00:26.053 **********
2026-04-05 00:44:46.496200 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:44:46.496233 | orchestrator |
2026-04-05 00:44:46.496245 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-05 00:44:46.496256 | orchestrator | Sunday 05 April 2026  00:44:41 +0000 (0:00:00.135)       0:00:26.189 **********
2026-04-05 00:44:46.496267 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}})
2026-04-05 00:44:46.496279 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}})
2026-04-05 00:44:46.496290 | orchestrator |
2026-04-05 00:44:46.496301 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-05 00:44:46.496312 | orchestrator | Sunday 05 April 2026  00:44:41 +0000 (0:00:00.184)       0:00:26.374 **********
2026-04-05 00:44:46.496323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}})
2026-04-05 00:44:46.496336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}})
2026-04-05 00:44:46.496346 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.496357 | orchestrator |
2026-04-05 00:44:46.496368 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-05 00:44:46.496379 | orchestrator | Sunday 05 April 2026  00:44:41 +0000 (0:00:00.163)       0:00:26.537 **********
2026-04-05 00:44:46.496389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}})
2026-04-05 00:44:46.496400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}})
2026-04-05 00:44:46.496411 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.496423 | orchestrator |
2026-04-05 00:44:46.496434 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-05 00:44:46.496444 | orchestrator | Sunday 05 April 2026  00:44:41 +0000 (0:00:00.160)       0:00:26.698 **********
2026-04-05 00:44:46.496455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}})
2026-04-05 00:44:46.496466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}})
2026-04-05 00:44:46.496507 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.496527 | orchestrator |
2026-04-05 00:44:46.496539 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-05 00:44:46.496550 | orchestrator | Sunday 05 April 2026  00:44:41 +0000 (0:00:00.154)       0:00:26.852 **********
2026-04-05 00:44:46.496561 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:44:46.496572 | orchestrator |
2026-04-05 00:44:46.496583 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-05 00:44:46.496594 | orchestrator | Sunday 05 April 2026  00:44:42 +0000 (0:00:00.148)       0:00:27.001 **********
2026-04-05 00:44:46.496605 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:44:46.496616 | orchestrator |
2026-04-05 00:44:46.496626 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-05 00:44:46.496638 | orchestrator | Sunday 05 April 2026  00:44:42 +0000 (0:00:00.144)       0:00:27.145 **********
2026-04-05 00:44:46.496670 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.496682 | orchestrator |
2026-04-05 00:44:46.496693 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-05 00:44:46.496704 | orchestrator | Sunday 05 April 2026  00:44:42 +0000 (0:00:00.158)       0:00:27.303 **********
2026-04-05 00:44:46.496715 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.496725 | orchestrator |
2026-04-05 00:44:46.496736 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-05 00:44:46.496747 | orchestrator | Sunday 05 April 2026  00:44:42 +0000 (0:00:00.348)       0:00:27.652 **********
2026-04-05 00:44:46.496757 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:46.496768 | orchestrator |
2026-04-05 00:44:46.496779 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-05 00:44:46.496801 | orchestrator | Sunday 05 April 2026  00:44:42 +0000 (0:00:00.138)       0:00:27.791 **********
2026-04-05 00:44:46.496812 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:44:46.496822 | orchestrator |     "ceph_osd_devices": {
2026-04-05 00:44:46.496833 | orchestrator |         "sdb":
{ 2026-04-05 00:44:46.496844 | orchestrator |  "osd_lvm_uuid": "2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff" 2026-04-05 00:44:46.496855 | orchestrator |  }, 2026-04-05 00:44:46.496866 | orchestrator |  "sdc": { 2026-04-05 00:44:46.496877 | orchestrator |  "osd_lvm_uuid": "e4b90bbc-8b4b-55ca-a382-2d9a937d0621" 2026-04-05 00:44:46.496888 | orchestrator |  } 2026-04-05 00:44:46.496899 | orchestrator |  } 2026-04-05 00:44:46.496910 | orchestrator | } 2026-04-05 00:44:46.496921 | orchestrator | 2026-04-05 00:44:46.496932 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-05 00:44:46.496943 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.190) 0:00:27.982 ********** 2026-04-05 00:44:46.496953 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:46.496964 | orchestrator | 2026-04-05 00:44:46.496975 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-05 00:44:46.496986 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.185) 0:00:28.167 ********** 2026-04-05 00:44:46.496996 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:46.497007 | orchestrator | 2026-04-05 00:44:46.497018 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-05 00:44:46.497028 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.130) 0:00:28.297 ********** 2026-04-05 00:44:46.497043 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:46.497062 | orchestrator | 2026-04-05 00:44:46.497080 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-05 00:44:46.497098 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.151) 0:00:28.449 ********** 2026-04-05 00:44:46.497117 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 00:44:46.497134 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-05 00:44:46.497151 | orchestrator 
|  "ceph_osd_devices": { 2026-04-05 00:44:46.497168 | orchestrator |  "sdb": { 2026-04-05 00:44:46.497185 | orchestrator |  "osd_lvm_uuid": "2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff" 2026-04-05 00:44:46.497202 | orchestrator |  }, 2026-04-05 00:44:46.497218 | orchestrator |  "sdc": { 2026-04-05 00:44:46.497235 | orchestrator |  "osd_lvm_uuid": "e4b90bbc-8b4b-55ca-a382-2d9a937d0621" 2026-04-05 00:44:46.497252 | orchestrator |  } 2026-04-05 00:44:46.497270 | orchestrator |  }, 2026-04-05 00:44:46.497288 | orchestrator |  "lvm_volumes": [ 2026-04-05 00:44:46.497307 | orchestrator |  { 2026-04-05 00:44:46.497326 | orchestrator |  "data": "osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff", 2026-04-05 00:44:46.497345 | orchestrator |  "data_vg": "ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff" 2026-04-05 00:44:46.497364 | orchestrator |  }, 2026-04-05 00:44:46.497395 | orchestrator |  { 2026-04-05 00:44:46.497413 | orchestrator |  "data": "osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621", 2026-04-05 00:44:46.497432 | orchestrator |  "data_vg": "ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621" 2026-04-05 00:44:46.497451 | orchestrator |  } 2026-04-05 00:44:46.497470 | orchestrator |  ] 2026-04-05 00:44:46.497516 | orchestrator |  } 2026-04-05 00:44:46.497535 | orchestrator | } 2026-04-05 00:44:46.497553 | orchestrator | 2026-04-05 00:44:46.497571 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-05 00:44:46.497591 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.232) 0:00:28.682 ********** 2026-04-05 00:44:46.497609 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-05 00:44:46.497629 | orchestrator | 2026-04-05 00:44:46.497642 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-05 00:44:46.497665 | orchestrator | 2026-04-05 00:44:46.497676 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
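The osd_lvm_uuid values printed above have a version-5 (name-based) layout, which suggests they are derived deterministically rather than generated at random, and the lvm_volumes entries are then just osd-block-<uuid>/ceph-<uuid> pairs wrapped around them. A rough sketch of that derivation follows; the namespace and the name being hashed are assumptions for illustration, since the log shows only the resulting UUIDs, not the inputs the playbook actually uses:

```python
import uuid

# Assumed namespace -- the real playbook's namespace/name inputs are not
# visible in this log, only the version-5 UUIDs it produced.
NAMESPACE = uuid.NAMESPACE_DNS

def osd_lvm_uuid(hostname: str, device: str) -> str:
    # uuid5 is name-based and deterministic: the same inputs yield the
    # same UUID on every run, so VG/LV names stay stable across replays.
    return str(uuid.uuid5(NAMESPACE, f"{hostname}:{device}"))

def lvm_volume(osd_uuid: str) -> dict:
    # Mirrors the structure shown by "Print configuration data":
    # an LV "osd-block-<uuid>" inside a VG "ceph-<uuid>".
    return {"data": f"osd-block-{osd_uuid}", "data_vg": f"ceph-{osd_uuid}"}

ceph_osd_devices = {
    dev: {"osd_lvm_uuid": osd_lvm_uuid("testbed-node-4", dev)}
    for dev in ("sdb", "sdc")
}
lvm_volumes = [lvm_volume(v["osd_lvm_uuid"]) for v in ceph_osd_devices.values()]
```

Determinism is what makes re-running the configuration play idempotent: the same host/device pair always maps to the same VG and LV names.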
2026-04-05 00:44:46.497686 | orchestrator | Sunday 05 April 2026 00:44:44 +0000 (0:00:01.227) 0:00:29.909 **********
2026-04-05 00:44:46.497698 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-05 00:44:46.497709 | orchestrator |
2026-04-05 00:44:46.497719 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:44:46.497730 | orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.507) 0:00:30.417 **********
2026-04-05 00:44:46.497741 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:44:46.497752 | orchestrator |
2026-04-05 00:44:46.497762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:46.497773 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 (0:00:00.689) 0:00:31.107 **********
2026-04-05 00:44:46.497784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-05 00:44:46.497795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-05 00:44:46.497805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-05 00:44:46.497816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-05 00:44:46.497827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-05 00:44:46.497849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-05 00:44:54.610399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-05 00:44:54.610555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-05 00:44:54.610570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-05 00:44:54.610581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-05 00:44:54.610591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-05 00:44:54.610601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-05 00:44:54.610611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-05 00:44:54.610621 | orchestrator |
2026-04-05 00:44:54.610632 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610643 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 (0:00:00.404) 0:00:31.511 **********
2026-04-05 00:44:54.610652 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.610663 | orchestrator |
2026-04-05 00:44:54.610673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610683 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 (0:00:00.207) 0:00:31.718 **********
2026-04-05 00:44:54.610693 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.610702 | orchestrator |
2026-04-05 00:44:54.610712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610722 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 (0:00:00.200) 0:00:31.919 **********
2026-04-05 00:44:54.610731 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.610741 | orchestrator |
2026-04-05 00:44:54.610751 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610760 | orchestrator | Sunday 05 April 2026 00:44:47 +0000 (0:00:00.236) 0:00:32.155 **********
2026-04-05 00:44:54.610770 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.610780 | orchestrator |
2026-04-05 00:44:54.610790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610799 | orchestrator | Sunday 05 April 2026 00:44:47 +0000 (0:00:00.185) 0:00:32.341 **********
2026-04-05 00:44:54.610809 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.610846 | orchestrator |
2026-04-05 00:44:54.610857 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610866 | orchestrator | Sunday 05 April 2026 00:44:47 +0000 (0:00:00.194) 0:00:32.536 **********
2026-04-05 00:44:54.610876 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.610886 | orchestrator |
2026-04-05 00:44:54.610896 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610905 | orchestrator | Sunday 05 April 2026 00:44:47 +0000 (0:00:00.205) 0:00:32.741 **********
2026-04-05 00:44:54.610915 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.610925 | orchestrator |
2026-04-05 00:44:54.610934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610947 | orchestrator | Sunday 05 April 2026 00:44:48 +0000 (0:00:00.227) 0:00:32.969 **********
2026-04-05 00:44:54.610958 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.610970 | orchestrator |
2026-04-05 00:44:54.610982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.610993 | orchestrator | Sunday 05 April 2026 00:44:48 +0000 (0:00:00.200) 0:00:33.170 **********
2026-04-05 00:44:54.611005 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0)
2026-04-05 00:44:54.611016 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0)
2026-04-05 00:44:54.611028 | orchestrator |
2026-04-05 00:44:54.611040 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.611052 | orchestrator | Sunday 05 April 2026 00:44:48 +0000 (0:00:00.752) 0:00:33.922 **********
2026-04-05 00:44:54.611063 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9)
2026-04-05 00:44:54.611074 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9)
2026-04-05 00:44:54.611086 | orchestrator |
2026-04-05 00:44:54.611099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.611117 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.786) 0:00:34.709 **********
2026-04-05 00:44:54.611134 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304)
2026-04-05 00:44:54.611152 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304)
2026-04-05 00:44:54.611171 | orchestrator |
2026-04-05 00:44:54.611191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.611210 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.412) 0:00:35.121 **********
2026-04-05 00:44:54.611223 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d)
2026-04-05 00:44:54.611252 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d)
2026-04-05 00:44:54.611264 | orchestrator |
2026-04-05 00:44:54.611276 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:54.611287 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.432) 0:00:35.554 **********
2026-04-05 00:44:54.611299 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 00:44:54.611310 | orchestrator |
2026-04-05 00:44:54.611320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611347 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.306) 0:00:35.860 **********
2026-04-05 00:44:54.611362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-05 00:44:54.611388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-05 00:44:54.611410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-05 00:44:54.611428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-05 00:44:54.611464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-05 00:44:54.611510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-05 00:44:54.611529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-05 00:44:54.611549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-05 00:44:54.611567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-05 00:44:54.611586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-05 00:44:54.611598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-05 00:44:54.611609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-05 00:44:54.611620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-05 00:44:54.611630 | orchestrator |
2026-04-05 00:44:54.611641 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611652 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.356) 0:00:36.217 **********
2026-04-05 00:44:54.611670 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.611681 | orchestrator |
2026-04-05 00:44:54.611692 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611703 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.216) 0:00:36.433 **********
2026-04-05 00:44:54.611714 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.611725 | orchestrator |
2026-04-05 00:44:54.611736 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611746 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.199) 0:00:36.632 **********
2026-04-05 00:44:54.611757 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.611768 | orchestrator |
2026-04-05 00:44:54.611779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611789 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.183) 0:00:36.816 **********
2026-04-05 00:44:54.611800 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.611811 | orchestrator |
2026-04-05 00:44:54.611822 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611832 | orchestrator | Sunday 05 April 2026 00:44:52 +0000 (0:00:00.193) 0:00:37.009 **********
2026-04-05 00:44:54.611843 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.611854 | orchestrator |
2026-04-05 00:44:54.611864 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611875 | orchestrator | Sunday 05 April 2026 00:44:52 +0000 (0:00:00.174) 0:00:37.184 **********
2026-04-05 00:44:54.611886 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.611897 | orchestrator |
2026-04-05 00:44:54.611907 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611918 | orchestrator | Sunday 05 April 2026 00:44:52 +0000 (0:00:00.534) 0:00:37.718 **********
2026-04-05 00:44:54.611929 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.611939 | orchestrator |
2026-04-05 00:44:54.611950 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.611961 | orchestrator | Sunday 05 April 2026 00:44:52 +0000 (0:00:00.195) 0:00:37.914 **********
2026-04-05 00:44:54.611971 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.611982 | orchestrator |
2026-04-05 00:44:54.611993 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.612003 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.216) 0:00:38.131 **********
2026-04-05 00:44:54.612014 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-05 00:44:54.612025 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-05 00:44:54.612057 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-05 00:44:54.612069 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-05 00:44:54.612079 | orchestrator |
2026-04-05 00:44:54.612090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.612101 | orchestrator | Sunday 05 April 2026 00:44:53 +0000 (0:00:00.619) 0:00:38.750 **********
2026-04-05 00:44:54.612112 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.612123 | orchestrator |
2026-04-05 00:44:54.612133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.612144 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.198) 0:00:38.948 **********
2026-04-05 00:44:54.612155 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.612166 | orchestrator |
2026-04-05 00:44:54.612176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.612187 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.205) 0:00:39.154 **********
2026-04-05 00:44:54.612198 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.612209 | orchestrator |
2026-04-05 00:44:54.612219 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:54.612230 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.181) 0:00:39.335 **********
2026-04-05 00:44:54.612241 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:54.612252 | orchestrator |
2026-04-05 00:44:54.612272 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-05 00:44:59.057840 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.205) 0:00:39.541 **********
2026-04-05 00:44:59.057951 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-05 00:44:59.057967 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-05 00:44:59.057978 | orchestrator |
2026-04-05 00:44:59.057989 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-05 00:44:59.058000 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.153) 0:00:39.694 **********
2026-04-05 00:44:59.058010 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:44:59.058085 | orchestrator |
2026-04-05 00:44:59.058096 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-05 00:44:59.058107 | orchestrator | Sunday 05 April 2026 00:44:54 +0000 (0:00:00.131) 0:00:39.825 **********
2026-04-05 00:44:59.058117 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058127 | orchestrator | 2026-04-05 00:44:59.058163 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-05 00:44:59.058173 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.129) 0:00:39.954 ********** 2026-04-05 00:44:59.058183 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058193 | orchestrator | 2026-04-05 00:44:59.058204 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-05 00:44:59.058215 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.183) 0:00:40.138 ********** 2026-04-05 00:44:59.058225 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:44:59.058236 | orchestrator | 2026-04-05 00:44:59.058247 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-05 00:44:59.058257 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.278) 0:00:40.417 ********** 2026-04-05 00:44:59.058267 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}}) 2026-04-05 00:44:59.058278 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecfcc343-98df-5597-aad3-97c87b883418'}}) 2026-04-05 00:44:59.058288 | orchestrator | 2026-04-05 00:44:59.058298 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-05 00:44:59.058327 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.160) 0:00:40.577 ********** 2026-04-05 00:44:59.058340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}})  2026-04-05 00:44:59.058377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecfcc343-98df-5597-aad3-97c87b883418'}})  
2026-04-05 00:44:59.058389 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058401 | orchestrator | 2026-04-05 00:44:59.058413 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-05 00:44:59.058425 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.154) 0:00:40.731 ********** 2026-04-05 00:44:59.058436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}})  2026-04-05 00:44:59.058448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecfcc343-98df-5597-aad3-97c87b883418'}})  2026-04-05 00:44:59.058460 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058471 | orchestrator | 2026-04-05 00:44:59.058512 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-05 00:44:59.058524 | orchestrator | Sunday 05 April 2026 00:44:55 +0000 (0:00:00.160) 0:00:40.892 ********** 2026-04-05 00:44:59.058535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}})  2026-04-05 00:44:59.058547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecfcc343-98df-5597-aad3-97c87b883418'}})  2026-04-05 00:44:59.058559 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058570 | orchestrator | 2026-04-05 00:44:59.058582 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-05 00:44:59.058593 | orchestrator | Sunday 05 April 2026 00:44:56 +0000 (0:00:00.159) 0:00:41.051 ********** 2026-04-05 00:44:59.058604 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:44:59.058616 | orchestrator | 2026-04-05 00:44:59.058627 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-05 00:44:59.058638 | 
orchestrator | Sunday 05 April 2026 00:44:56 +0000 (0:00:00.135) 0:00:41.187 ********** 2026-04-05 00:44:59.058650 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:44:59.058661 | orchestrator | 2026-04-05 00:44:59.058673 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-05 00:44:59.058684 | orchestrator | Sunday 05 April 2026 00:44:56 +0000 (0:00:00.164) 0:00:41.351 ********** 2026-04-05 00:44:59.058696 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058707 | orchestrator | 2026-04-05 00:44:59.058719 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-05 00:44:59.058728 | orchestrator | Sunday 05 April 2026 00:44:56 +0000 (0:00:00.158) 0:00:41.510 ********** 2026-04-05 00:44:59.058738 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058747 | orchestrator | 2026-04-05 00:44:59.058757 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-05 00:44:59.058767 | orchestrator | Sunday 05 April 2026 00:44:56 +0000 (0:00:00.143) 0:00:41.654 ********** 2026-04-05 00:44:59.058776 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058786 | orchestrator | 2026-04-05 00:44:59.058795 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-05 00:44:59.058805 | orchestrator | Sunday 05 April 2026 00:44:56 +0000 (0:00:00.144) 0:00:41.798 ********** 2026-04-05 00:44:59.058815 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:44:59.058825 | orchestrator |  "ceph_osd_devices": { 2026-04-05 00:44:59.058834 | orchestrator |  "sdb": { 2026-04-05 00:44:59.058863 | orchestrator |  "osd_lvm_uuid": "f6b2ea8b-e42f-5ec6-b7af-dc106d037603" 2026-04-05 00:44:59.058874 | orchestrator |  }, 2026-04-05 00:44:59.058884 | orchestrator |  "sdc": { 2026-04-05 00:44:59.058893 | orchestrator |  "osd_lvm_uuid": 
"ecfcc343-98df-5597-aad3-97c87b883418" 2026-04-05 00:44:59.058903 | orchestrator |  } 2026-04-05 00:44:59.058913 | orchestrator |  } 2026-04-05 00:44:59.058923 | orchestrator | } 2026-04-05 00:44:59.058933 | orchestrator | 2026-04-05 00:44:59.058942 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-05 00:44:59.058963 | orchestrator | Sunday 05 April 2026 00:44:57 +0000 (0:00:00.144) 0:00:41.943 ********** 2026-04-05 00:44:59.058973 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.058982 | orchestrator | 2026-04-05 00:44:59.058992 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-05 00:44:59.059001 | orchestrator | Sunday 05 April 2026 00:44:57 +0000 (0:00:00.138) 0:00:42.081 ********** 2026-04-05 00:44:59.059011 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.059020 | orchestrator | 2026-04-05 00:44:59.059030 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-05 00:44:59.059039 | orchestrator | Sunday 05 April 2026 00:44:57 +0000 (0:00:00.368) 0:00:42.450 ********** 2026-04-05 00:44:59.059049 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:44:59.059058 | orchestrator | 2026-04-05 00:44:59.059068 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-05 00:44:59.059077 | orchestrator | Sunday 05 April 2026 00:44:57 +0000 (0:00:00.139) 0:00:42.589 ********** 2026-04-05 00:44:59.059087 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 00:44:59.059097 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-05 00:44:59.059107 | orchestrator |  "ceph_osd_devices": { 2026-04-05 00:44:59.059116 | orchestrator |  "sdb": { 2026-04-05 00:44:59.059126 | orchestrator |  "osd_lvm_uuid": "f6b2ea8b-e42f-5ec6-b7af-dc106d037603" 2026-04-05 00:44:59.059136 | orchestrator |  }, 2026-04-05 00:44:59.059145 | 
orchestrator |  "sdc": { 2026-04-05 00:44:59.059155 | orchestrator |  "osd_lvm_uuid": "ecfcc343-98df-5597-aad3-97c87b883418" 2026-04-05 00:44:59.059164 | orchestrator |  } 2026-04-05 00:44:59.059174 | orchestrator |  }, 2026-04-05 00:44:59.059184 | orchestrator |  "lvm_volumes": [ 2026-04-05 00:44:59.059194 | orchestrator |  { 2026-04-05 00:44:59.059203 | orchestrator |  "data": "osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603", 2026-04-05 00:44:59.059213 | orchestrator |  "data_vg": "ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603" 2026-04-05 00:44:59.059222 | orchestrator |  }, 2026-04-05 00:44:59.059232 | orchestrator |  { 2026-04-05 00:44:59.059246 | orchestrator |  "data": "osd-block-ecfcc343-98df-5597-aad3-97c87b883418", 2026-04-05 00:44:59.059256 | orchestrator |  "data_vg": "ceph-ecfcc343-98df-5597-aad3-97c87b883418" 2026-04-05 00:44:59.059266 | orchestrator |  } 2026-04-05 00:44:59.059275 | orchestrator |  ] 2026-04-05 00:44:59.059285 | orchestrator |  } 2026-04-05 00:44:59.059294 | orchestrator | } 2026-04-05 00:44:59.059304 | orchestrator | 2026-04-05 00:44:59.059313 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-05 00:44:59.059323 | orchestrator | Sunday 05 April 2026 00:44:57 +0000 (0:00:00.218) 0:00:42.808 ********** 2026-04-05 00:44:59.059332 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-05 00:44:59.059342 | orchestrator | 2026-04-05 00:44:59.059352 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:44:59.059361 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 00:44:59.059373 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 00:44:59.059383 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 
00:44:59.059392 | orchestrator | 2026-04-05 00:44:59.059402 | orchestrator | 2026-04-05 00:44:59.059411 | orchestrator | 2026-04-05 00:44:59.059421 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:44:59.059431 | orchestrator | Sunday 05 April 2026 00:44:59 +0000 (0:00:01.161) 0:00:43.970 ********** 2026-04-05 00:44:59.059440 | orchestrator | =============================================================================== 2026-04-05 00:44:59.059456 | orchestrator | Write configuration file ------------------------------------------------ 4.69s 2026-04-05 00:44:59.059466 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2026-04-05 00:44:59.059587 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s 2026-04-05 00:44:59.059602 | orchestrator | Get initial list of available block devices ----------------------------- 1.15s 2026-04-05 00:44:59.059612 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2026-04-05 00:44:59.059621 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.03s 2026-04-05 00:44:59.059631 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2026-04-05 00:44:59.059640 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2026-04-05 00:44:59.059650 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-04-05 00:44:59.059659 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-04-05 00:44:59.059669 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.74s 2026-04-05 00:44:59.059678 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-04-05 
00:44:59.059688 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.69s 2026-04-05 00:44:59.059707 | orchestrator | Print configuration data ------------------------------------------------ 0.68s 2026-04-05 00:44:59.436978 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-04-05 00:44:59.437078 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-04-05 00:44:59.437113 | orchestrator | Set WAL devices config data --------------------------------------------- 0.65s 2026-04-05 00:44:59.437125 | orchestrator | Print DB devices -------------------------------------------------------- 0.64s 2026-04-05 00:44:59.437134 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2026-04-05 00:44:59.437143 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-04-05 00:45:21.240136 | orchestrator | 2026-04-05 00:45:21 | INFO  | Task 4449ea18-ab86-4641-aaf0-448983d3be76 (sync inventory) is running in background. Output coming soon. 
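The `lvm_volumes` structure printed by "Print configuration data" above is mechanically derivable from `ceph_osd_devices`: each OSD's `osd_lvm_uuid` yields an `osd-block-<uuid>` LV inside a `ceph-<uuid>` VG. A minimal sketch of that mapping (plain Python rather than the playbook's actual Jinja2; the variable names are taken from the log output above, the derivation rule is inferred from it):

```python
# Derive the lvm_volumes list from ceph_osd_devices, mirroring the
# "Print configuration data" output above: one LV/VG pair per OSD UUID.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "f6b2ea8b-e42f-5ec6-b7af-dc106d037603"},
    "sdc": {"osd_lvm_uuid": "ecfcc343-98df-5597-aad3-97c87b883418"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{spec['osd_lvm_uuid']}",
        "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
    }
    for spec in ceph_osd_devices.values()
]

print(lvm_volumes[0]["data_vg"])
# ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603
```

This is also why the later "Create block VGs" / "Create block LVs" tasks loop over exactly one `(data, data_vg)` item per device.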
2026-04-05 00:45:52.978846 | orchestrator | 2026-04-05 00:45:22 | INFO  | Starting group_vars file reorganization 2026-04-05 00:45:52.978956 | orchestrator | 2026-04-05 00:45:22 | INFO  | Moved 0 file(s) to their respective directories 2026-04-05 00:45:52.978972 | orchestrator | 2026-04-05 00:45:22 | INFO  | Group_vars file reorganization completed 2026-04-05 00:45:52.978985 | orchestrator | 2026-04-05 00:45:25 | INFO  | Starting variable preparation from inventory 2026-04-05 00:45:52.978997 | orchestrator | 2026-04-05 00:45:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-04-05 00:45:52.979008 | orchestrator | 2026-04-05 00:45:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-04-05 00:45:52.979041 | orchestrator | 2026-04-05 00:45:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-04-05 00:45:52.979053 | orchestrator | 2026-04-05 00:45:28 | INFO  | 3 file(s) written, 6 host(s) processed 2026-04-05 00:45:52.979064 | orchestrator | 2026-04-05 00:45:28 | INFO  | Variable preparation completed 2026-04-05 00:45:52.979075 | orchestrator | 2026-04-05 00:45:30 | INFO  | Starting inventory overwrite handling 2026-04-05 00:45:52.979086 | orchestrator | 2026-04-05 00:45:30 | INFO  | Handling group overwrites in 99-overwrite 2026-04-05 00:45:52.979097 | orchestrator | 2026-04-05 00:45:30 | INFO  | Removing group frr:children from 60-generic 2026-04-05 00:45:52.979108 | orchestrator | 2026-04-05 00:45:30 | INFO  | Removing group netbird:children from 50-infrastructure 2026-04-05 00:45:52.979146 | orchestrator | 2026-04-05 00:45:30 | INFO  | Removing group ceph-mds from 50-ceph 2026-04-05 00:45:52.979158 | orchestrator | 2026-04-05 00:45:30 | INFO  | Removing group ceph-rgw from 50-ceph 2026-04-05 00:45:52.979169 | orchestrator | 2026-04-05 00:45:30 | INFO  | Handling group overwrites in 20-roles 2026-04-05 00:45:52.979180 | orchestrator | 2026-04-05 00:45:30 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-04-05 00:45:52.979191 | orchestrator | 2026-04-05 00:45:30 | INFO  | Removed 5 group(s) in total 2026-04-05 00:45:52.979201 | orchestrator | 2026-04-05 00:45:30 | INFO  | Inventory overwrite handling completed 2026-04-05 00:45:52.979212 | orchestrator | 2026-04-05 00:45:31 | INFO  | Starting merge of inventory files 2026-04-05 00:45:52.979223 | orchestrator | 2026-04-05 00:45:31 | INFO  | Inventory files merged successfully 2026-04-05 00:45:52.979234 | orchestrator | 2026-04-05 00:45:36 | INFO  | Generating minified hosts file 2026-04-05 00:45:52.979245 | orchestrator | 2026-04-05 00:45:37 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml 2026-04-05 00:45:52.979257 | orchestrator | 2026-04-05 00:45:37 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json 2026-04-05 00:45:52.979268 | orchestrator | 2026-04-05 00:45:39 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-04-05 00:45:52.979278 | orchestrator | 2026-04-05 00:45:51 | INFO  | Successfully wrote ClusterShell configuration 2026-04-05 00:45:52.979290 | orchestrator | [master fdcddaa] 2026-04-05-00-45 2026-04-05 00:45:52.979302 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-) 2026-04-05 00:45:52.979314 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml 2026-04-05 00:45:52.979325 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml 2026-04-05 00:45:52.979336 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml 2026-04-05 00:45:54.545054 | orchestrator | 2026-04-05 00:45:54 | INFO  | Prepare task for execution of ceph-create-lvm-devices. 2026-04-05 00:45:54.612159 | orchestrator | 2026-04-05 00:45:54 | INFO  | Task 67c447d7-463f-4829-bfef-92006769abf4 (ceph-create-lvm-devices) was prepared for execution. 
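The "inventory overwrite handling" logged above removes any group that a higher-priority inventory layer redefines (e.g. `frr:children` dropped from `60-generic`, `netbird:children` and `k3s_node` dropped from `50-infrastructure`), reporting "Removed 5 group(s) in total". A minimal sketch of that precedence rule, with hypothetical layer contents loosely based on the log messages (the real implementation lives in the OSISM inventory tooling and is not shown in this log):

```python
# Drop any group redefined in an overwrite layer from lower-priority
# layers, mirroring the "Removing group X from Y" messages above.
def apply_overwrites(layers: dict[str, dict], overwrite_layer: str) -> int:
    removed = 0
    overwritten = set(layers[overwrite_layer])
    for name, groups in layers.items():
        if name == overwrite_layer:
            continue
        # Intersect first, then delete, so we never mutate while iterating.
        for group in overwritten & set(groups):
            del groups[group]
            removed += 1
    return removed

# Hypothetical layer contents; group names taken from the log above.
layers = {
    "60-generic": {"frr:children": ["testbed-nodes"]},
    "50-infrastructure": {"netbird:children": [], "k3s_node": []},
    "99-overwrite": {"frr:children": [], "netbird:children": []},
}
print(apply_overwrites(layers, "99-overwrite"))  # 2
```

After this pass, only the definitions from the overwrite layer survive, which is what makes the subsequent "merge of inventory files" step deterministic.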
2026-04-05 00:45:54.612543 | orchestrator | 2026-04-05 00:45:54 | INFO  | It takes a moment until task 67c447d7-463f-4829-bfef-92006769abf4 (ceph-create-lvm-devices) has been started and output is visible here. 2026-04-05 00:46:06.451832 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 00:46:06.451974 | orchestrator | 2.16.14 2026-04-05 00:46:06.452004 | orchestrator | 2026-04-05 00:46:06.452026 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-05 00:46:06.452048 | orchestrator | 2026-04-05 00:46:06.452068 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 00:46:06.452088 | orchestrator | Sunday 05 April 2026 00:45:58 +0000 (0:00:00.261) 0:00:00.261 ********** 2026-04-05 00:46:06.452108 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 00:46:06.452128 | orchestrator | 2026-04-05 00:46:06.452146 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 00:46:06.452166 | orchestrator | Sunday 05 April 2026 00:45:58 +0000 (0:00:00.211) 0:00:00.472 ********** 2026-04-05 00:46:06.452186 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:46:06.452206 | orchestrator | 2026-04-05 00:46:06.452224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.452245 | orchestrator | Sunday 05 April 2026 00:45:59 +0000 (0:00:00.210) 0:00:00.683 ********** 2026-04-05 00:46:06.452264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-05 00:46:06.452314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-05 00:46:06.452335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-05 00:46:06.452353 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-05 00:46:06.452373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-05 00:46:06.452392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-05 00:46:06.452412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-05 00:46:06.452431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-05 00:46:06.452451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-05 00:46:06.452469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-05 00:46:06.452488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-05 00:46:06.452537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-05 00:46:06.452557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-05 00:46:06.452575 | orchestrator | 2026-04-05 00:46:06.452595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.452616 | orchestrator | Sunday 05 April 2026 00:45:59 +0000 (0:00:00.383) 0:00:01.066 ********** 2026-04-05 00:46:06.452635 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.452652 | orchestrator | 2026-04-05 00:46:06.452666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.452680 | orchestrator | Sunday 05 April 2026 00:45:59 +0000 (0:00:00.393) 0:00:01.460 ********** 2026-04-05 00:46:06.452693 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.452705 | orchestrator | 2026-04-05 00:46:06.452722 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.452741 | orchestrator | Sunday 05 April 2026 00:46:00 +0000 (0:00:00.186) 0:00:01.646 ********** 2026-04-05 00:46:06.452759 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.452786 | orchestrator | 2026-04-05 00:46:06.452806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.452825 | orchestrator | Sunday 05 April 2026 00:46:00 +0000 (0:00:00.241) 0:00:01.888 ********** 2026-04-05 00:46:06.452847 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.452873 | orchestrator | 2026-04-05 00:46:06.452891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.452910 | orchestrator | Sunday 05 April 2026 00:46:00 +0000 (0:00:00.208) 0:00:02.096 ********** 2026-04-05 00:46:06.452928 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.452946 | orchestrator | 2026-04-05 00:46:06.452964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.452983 | orchestrator | Sunday 05 April 2026 00:46:00 +0000 (0:00:00.204) 0:00:02.301 ********** 2026-04-05 00:46:06.453002 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.453020 | orchestrator | 2026-04-05 00:46:06.453038 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.453057 | orchestrator | Sunday 05 April 2026 00:46:01 +0000 (0:00:00.211) 0:00:02.513 ********** 2026-04-05 00:46:06.453075 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.453094 | orchestrator | 2026-04-05 00:46:06.453113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.453131 | orchestrator | Sunday 05 April 2026 00:46:01 +0000 (0:00:00.223) 0:00:02.736 ********** 
2026-04-05 00:46:06.453150 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.453168 | orchestrator | 2026-04-05 00:46:06.453186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.453219 | orchestrator | Sunday 05 April 2026 00:46:01 +0000 (0:00:00.206) 0:00:02.942 ********** 2026-04-05 00:46:06.453260 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886) 2026-04-05 00:46:06.453281 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886) 2026-04-05 00:46:06.453299 | orchestrator | 2026-04-05 00:46:06.453319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.453363 | orchestrator | Sunday 05 April 2026 00:46:01 +0000 (0:00:00.418) 0:00:03.360 ********** 2026-04-05 00:46:06.453384 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087) 2026-04-05 00:46:06.453403 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087) 2026-04-05 00:46:06.453422 | orchestrator | 2026-04-05 00:46:06.453439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.453458 | orchestrator | Sunday 05 April 2026 00:46:02 +0000 (0:00:00.397) 0:00:03.758 ********** 2026-04-05 00:46:06.453477 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966) 2026-04-05 00:46:06.453536 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966) 2026-04-05 00:46:06.453555 | orchestrator | 2026-04-05 00:46:06.453572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.453591 | orchestrator | Sunday 05 April 2026 00:46:02 +0000 
(0:00:00.642) 0:00:04.400 ********** 2026-04-05 00:46:06.453609 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30) 2026-04-05 00:46:06.453628 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30) 2026-04-05 00:46:06.453647 | orchestrator | 2026-04-05 00:46:06.453666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:06.453684 | orchestrator | Sunday 05 April 2026 00:46:03 +0000 (0:00:00.728) 0:00:05.128 ********** 2026-04-05 00:46:06.453704 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:46:06.453722 | orchestrator | 2026-04-05 00:46:06.453741 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:06.453759 | orchestrator | Sunday 05 April 2026 00:46:04 +0000 (0:00:00.854) 0:00:05.983 ********** 2026-04-05 00:46:06.453786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-05 00:46:06.453806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-05 00:46:06.453826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-05 00:46:06.453845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-05 00:46:06.453863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-05 00:46:06.453882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-05 00:46:06.453900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-05 00:46:06.453919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-04-05 00:46:06.453937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-05 00:46:06.453955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-05 00:46:06.453973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-05 00:46:06.453993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-05 00:46:06.454104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-05 00:46:06.454129 | orchestrator | 2026-04-05 00:46:06.454140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:06.454149 | orchestrator | Sunday 05 April 2026 00:46:04 +0000 (0:00:00.441) 0:00:06.424 ********** 2026-04-05 00:46:06.454159 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.454168 | orchestrator | 2026-04-05 00:46:06.454178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:06.454187 | orchestrator | Sunday 05 April 2026 00:46:05 +0000 (0:00:00.226) 0:00:06.650 ********** 2026-04-05 00:46:06.454197 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.454207 | orchestrator | 2026-04-05 00:46:06.454216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:06.454226 | orchestrator | Sunday 05 April 2026 00:46:05 +0000 (0:00:00.204) 0:00:06.855 ********** 2026-04-05 00:46:06.454235 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.454245 | orchestrator | 2026-04-05 00:46:06.454254 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:06.454264 | orchestrator | Sunday 05 April 2026 00:46:05 +0000 
(0:00:00.242) 0:00:07.098 ********** 2026-04-05 00:46:06.454274 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.454283 | orchestrator | 2026-04-05 00:46:06.454293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:06.454302 | orchestrator | Sunday 05 April 2026 00:46:05 +0000 (0:00:00.192) 0:00:07.290 ********** 2026-04-05 00:46:06.454312 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.454321 | orchestrator | 2026-04-05 00:46:06.454331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:06.454341 | orchestrator | Sunday 05 April 2026 00:46:06 +0000 (0:00:00.221) 0:00:07.512 ********** 2026-04-05 00:46:06.454350 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.454360 | orchestrator | 2026-04-05 00:46:06.454369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:06.454379 | orchestrator | Sunday 05 April 2026 00:46:06 +0000 (0:00:00.178) 0:00:07.691 ********** 2026-04-05 00:46:06.454389 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:06.454398 | orchestrator | 2026-04-05 00:46:06.454419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:15.203343 | orchestrator | Sunday 05 April 2026 00:46:06 +0000 (0:00:00.228) 0:00:07.919 ********** 2026-04-05 00:46:15.203475 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.203568 | orchestrator | 2026-04-05 00:46:15.203588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:15.203604 | orchestrator | Sunday 05 April 2026 00:46:06 +0000 (0:00:00.206) 0:00:08.126 ********** 2026-04-05 00:46:15.203622 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-05 00:46:15.203641 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-05 
00:46:15.203659 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-05 00:46:15.203676 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-05 00:46:15.203695 | orchestrator | 2026-04-05 00:46:15.203714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:15.203731 | orchestrator | Sunday 05 April 2026 00:46:07 +0000 (0:00:01.215) 0:00:09.342 ********** 2026-04-05 00:46:15.203751 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.203768 | orchestrator | 2026-04-05 00:46:15.203785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:15.203803 | orchestrator | Sunday 05 April 2026 00:46:08 +0000 (0:00:00.225) 0:00:09.567 ********** 2026-04-05 00:46:15.203822 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.203839 | orchestrator | 2026-04-05 00:46:15.203858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:15.203877 | orchestrator | Sunday 05 April 2026 00:46:08 +0000 (0:00:00.225) 0:00:09.793 ********** 2026-04-05 00:46:15.203928 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.203949 | orchestrator | 2026-04-05 00:46:15.203966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:15.203986 | orchestrator | Sunday 05 April 2026 00:46:08 +0000 (0:00:00.229) 0:00:10.023 ********** 2026-04-05 00:46:15.204004 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.204021 | orchestrator | 2026-04-05 00:46:15.204040 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-05 00:46:15.204059 | orchestrator | Sunday 05 April 2026 00:46:08 +0000 (0:00:00.243) 0:00:10.266 ********** 2026-04-05 00:46:15.204079 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.204099 | orchestrator | 2026-04-05 
00:46:15.204117 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-05 00:46:15.204134 | orchestrator | Sunday 05 April 2026 00:46:08 +0000 (0:00:00.144) 0:00:10.411 ********** 2026-04-05 00:46:15.204154 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '157b1f80-825d-547a-87b1-b4c204357e87'}}) 2026-04-05 00:46:15.204172 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b6d430e-d9c3-5542-869b-9d02c8b92670'}}) 2026-04-05 00:46:15.204189 | orchestrator | 2026-04-05 00:46:15.204208 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-05 00:46:15.204227 | orchestrator | Sunday 05 April 2026 00:46:09 +0000 (0:00:00.254) 0:00:10.666 ********** 2026-04-05 00:46:15.204247 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'}) 2026-04-05 00:46:15.204268 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'}) 2026-04-05 00:46:15.204285 | orchestrator | 2026-04-05 00:46:15.204303 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-05 00:46:15.204322 | orchestrator | Sunday 05 April 2026 00:46:11 +0000 (0:00:02.155) 0:00:12.821 ********** 2026-04-05 00:46:15.204340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})  2026-04-05 00:46:15.204360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})  2026-04-05 00:46:15.204379 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.204398 
| orchestrator | 2026-04-05 00:46:15.204417 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-05 00:46:15.204436 | orchestrator | Sunday 05 April 2026 00:46:11 +0000 (0:00:00.199) 0:00:13.021 ********** 2026-04-05 00:46:15.204454 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'}) 2026-04-05 00:46:15.204472 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'}) 2026-04-05 00:46:15.204523 | orchestrator | 2026-04-05 00:46:15.204544 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-05 00:46:15.204562 | orchestrator | Sunday 05 April 2026 00:46:13 +0000 (0:00:01.476) 0:00:14.498 ********** 2026-04-05 00:46:15.204581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})  2026-04-05 00:46:15.204601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})  2026-04-05 00:46:15.204619 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.204630 | orchestrator | 2026-04-05 00:46:15.204641 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-05 00:46:15.204669 | orchestrator | Sunday 05 April 2026 00:46:13 +0000 (0:00:00.151) 0:00:14.649 ********** 2026-04-05 00:46:15.204705 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.204717 | orchestrator | 2026-04-05 00:46:15.204728 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-05 00:46:15.204757 | orchestrator | Sunday 05 April 2026 00:46:13 
+0000 (0:00:00.204) 0:00:14.854 ********** 2026-04-05 00:46:15.204769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})  2026-04-05 00:46:15.204780 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})  2026-04-05 00:46:15.204797 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.204815 | orchestrator | 2026-04-05 00:46:15.204834 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-05 00:46:15.204852 | orchestrator | Sunday 05 April 2026 00:46:13 +0000 (0:00:00.393) 0:00:15.248 ********** 2026-04-05 00:46:15.204870 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.204888 | orchestrator | 2026-04-05 00:46:15.204906 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-05 00:46:15.204922 | orchestrator | Sunday 05 April 2026 00:46:13 +0000 (0:00:00.143) 0:00:15.391 ********** 2026-04-05 00:46:15.204938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})  2026-04-05 00:46:15.204956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})  2026-04-05 00:46:15.204975 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.204992 | orchestrator | 2026-04-05 00:46:15.205018 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-05 00:46:15.205035 | orchestrator | Sunday 05 April 2026 00:46:14 +0000 (0:00:00.164) 0:00:15.555 ********** 2026-04-05 00:46:15.205052 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.205069 
| orchestrator | 2026-04-05 00:46:15.205087 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-05 00:46:15.205106 | orchestrator | Sunday 05 April 2026 00:46:14 +0000 (0:00:00.151) 0:00:15.707 ********** 2026-04-05 00:46:15.205123 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})  2026-04-05 00:46:15.205142 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})  2026-04-05 00:46:15.205162 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.205181 | orchestrator | 2026-04-05 00:46:15.205199 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-05 00:46:15.205218 | orchestrator | Sunday 05 April 2026 00:46:14 +0000 (0:00:00.174) 0:00:15.881 ********** 2026-04-05 00:46:15.205237 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:46:15.205256 | orchestrator | 2026-04-05 00:46:15.205274 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-05 00:46:15.205292 | orchestrator | Sunday 05 April 2026 00:46:14 +0000 (0:00:00.147) 0:00:16.029 ********** 2026-04-05 00:46:15.205311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})  2026-04-05 00:46:15.205324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})  2026-04-05 00:46:15.205335 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:46:15.205346 | orchestrator | 2026-04-05 00:46:15.205357 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 
2026-04-05 00:46:15.205368 | orchestrator | Sunday 05 April 2026  00:46:14 +0000 (0:00:00.201)       0:00:16.230 **********
2026-04-05 00:46:15.205391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:15.205402 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:15.205413 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:15.205423 | orchestrator |
2026-04-05 00:46:15.205434 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-05 00:46:15.205445 | orchestrator | Sunday 05 April 2026  00:46:14 +0000 (0:00:00.155)       0:00:16.386 **********
2026-04-05 00:46:15.205456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:15.205466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:15.205477 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:15.205488 | orchestrator |
2026-04-05 00:46:15.205559 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-05 00:46:15.205571 | orchestrator | Sunday 05 April 2026  00:46:15 +0000 (0:00:00.151)       0:00:16.537 **********
2026-04-05 00:46:15.205582 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:15.205593 | orchestrator |
2026-04-05 00:46:15.205604 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-05 00:46:15.205628 | orchestrator | Sunday 05 April 2026  00:46:15 +0000 (0:00:00.135)       0:00:16.672 **********
2026-04-05 00:46:21.716769 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.716864 | orchestrator |
2026-04-05 00:46:21.716877 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-05 00:46:21.716887 | orchestrator | Sunday 05 April 2026  00:46:15 +0000 (0:00:00.136)       0:00:16.809 **********
2026-04-05 00:46:21.716896 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.716903 | orchestrator |
2026-04-05 00:46:21.716911 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-05 00:46:21.716918 | orchestrator | Sunday 05 April 2026  00:46:15 +0000 (0:00:00.128)       0:00:16.937 **********
2026-04-05 00:46:21.716925 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:21.716932 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-05 00:46:21.716940 | orchestrator | }
2026-04-05 00:46:21.716947 | orchestrator |
2026-04-05 00:46:21.716955 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-05 00:46:21.716962 | orchestrator | Sunday 05 April 2026  00:46:15 +0000 (0:00:00.375)       0:00:17.312 **********
2026-04-05 00:46:21.716969 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:21.716976 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-05 00:46:21.716983 | orchestrator | }
2026-04-05 00:46:21.716990 | orchestrator |
2026-04-05 00:46:21.716998 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-05 00:46:21.717004 | orchestrator | Sunday 05 April 2026  00:46:15 +0000 (0:00:00.152)       0:00:17.465 **********
2026-04-05 00:46:21.717011 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:21.717017 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-05 00:46:21.717024 | orchestrator | }
2026-04-05 00:46:21.717030 | orchestrator |
2026-04-05 00:46:21.717038 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-05 00:46:21.717045 | orchestrator | Sunday 05 April 2026  00:46:16 +0000 (0:00:00.141)       0:00:17.607 **********
2026-04-05 00:46:21.717051 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:21.717058 | orchestrator |
2026-04-05 00:46:21.717079 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-05 00:46:21.717087 | orchestrator | Sunday 05 April 2026  00:46:16 +0000 (0:00:00.646)       0:00:18.253 **********
2026-04-05 00:46:21.717094 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:21.717120 | orchestrator |
2026-04-05 00:46:21.717128 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-05 00:46:21.717135 | orchestrator | Sunday 05 April 2026  00:46:17 +0000 (0:00:00.507)       0:00:18.761 **********
2026-04-05 00:46:21.717142 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:21.717149 | orchestrator |
2026-04-05 00:46:21.717155 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-05 00:46:21.717161 | orchestrator | Sunday 05 April 2026  00:46:17 +0000 (0:00:00.524)       0:00:19.286 **********
2026-04-05 00:46:21.717167 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:21.717173 | orchestrator |
2026-04-05 00:46:21.717179 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-05 00:46:21.717185 | orchestrator | Sunday 05 April 2026  00:46:17 +0000 (0:00:00.155)       0:00:19.442 **********
2026-04-05 00:46:21.717191 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717198 | orchestrator |
2026-04-05 00:46:21.717205 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-05 00:46:21.717212 | orchestrator | Sunday 05 April 2026  00:46:18 +0000 (0:00:00.128)       0:00:19.571 **********
2026-04-05 00:46:21.717219 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717226 | orchestrator |
2026-04-05 00:46:21.717233 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-05 00:46:21.717241 | orchestrator | Sunday 05 April 2026  00:46:18 +0000 (0:00:00.102)       0:00:19.674 **********
2026-04-05 00:46:21.717248 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:21.717255 | orchestrator |     "vgs_report": {
2026-04-05 00:46:21.717262 | orchestrator |         "vg": []
2026-04-05 00:46:21.717270 | orchestrator |     }
2026-04-05 00:46:21.717277 | orchestrator | }
2026-04-05 00:46:21.717285 | orchestrator |
2026-04-05 00:46:21.717292 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-05 00:46:21.717299 | orchestrator | Sunday 05 April 2026  00:46:18 +0000 (0:00:00.144)       0:00:19.818 **********
2026-04-05 00:46:21.717306 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717313 | orchestrator |
2026-04-05 00:46:21.717320 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-05 00:46:21.717328 | orchestrator | Sunday 05 April 2026  00:46:18 +0000 (0:00:00.133)       0:00:19.952 **********
2026-04-05 00:46:21.717336 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717344 | orchestrator |
2026-04-05 00:46:21.717352 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-05 00:46:21.717360 | orchestrator | Sunday 05 April 2026  00:46:18 +0000 (0:00:00.136)       0:00:20.088 **********
2026-04-05 00:46:21.717368 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717376 | orchestrator |
2026-04-05 00:46:21.717384 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-05 00:46:21.717392 | orchestrator | Sunday 05 April 2026  00:46:18 +0000 (0:00:00.136)       0:00:20.225 **********
2026-04-05 00:46:21.717400 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717407 | orchestrator |
2026-04-05 00:46:21.717414 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-05 00:46:21.717421 | orchestrator | Sunday 05 April 2026  00:46:19 +0000 (0:00:00.371)       0:00:20.597 **********
2026-04-05 00:46:21.717427 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717433 | orchestrator |
2026-04-05 00:46:21.717440 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-05 00:46:21.717447 | orchestrator | Sunday 05 April 2026  00:46:19 +0000 (0:00:00.195)       0:00:20.792 **********
2026-04-05 00:46:21.717454 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717461 | orchestrator |
2026-04-05 00:46:21.717468 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-05 00:46:21.717476 | orchestrator | Sunday 05 April 2026  00:46:19 +0000 (0:00:00.134)       0:00:20.927 **********
2026-04-05 00:46:21.717483 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717490 | orchestrator |
2026-04-05 00:46:21.717519 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-05 00:46:21.717531 | orchestrator | Sunday 05 April 2026  00:46:19 +0000 (0:00:00.134)       0:00:21.061 **********
2026-04-05 00:46:21.717553 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717560 | orchestrator |
2026-04-05 00:46:21.717567 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-05 00:46:21.717574 | orchestrator | Sunday 05 April 2026  00:46:19 +0000 (0:00:00.139)       0:00:21.201 **********
2026-04-05 00:46:21.717581 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717588 | orchestrator |
2026-04-05 00:46:21.717595 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-05 00:46:21.717602 | orchestrator | Sunday 05 April 2026  00:46:19 +0000 (0:00:00.147)       0:00:21.348 **********
2026-04-05 00:46:21.717609 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717616 | orchestrator |
2026-04-05 00:46:21.717623 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-05 00:46:21.717631 | orchestrator | Sunday 05 April 2026  00:46:20 +0000 (0:00:00.138)       0:00:21.487 **********
2026-04-05 00:46:21.717638 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717645 | orchestrator |
2026-04-05 00:46:21.717652 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-05 00:46:21.717660 | orchestrator | Sunday 05 April 2026  00:46:20 +0000 (0:00:00.132)       0:00:21.620 **********
2026-04-05 00:46:21.717667 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717674 | orchestrator |
2026-04-05 00:46:21.717681 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-05 00:46:21.717688 | orchestrator | Sunday 05 April 2026  00:46:20 +0000 (0:00:00.139)       0:00:21.759 **********
2026-04-05 00:46:21.717697 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717704 | orchestrator |
2026-04-05 00:46:21.717711 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-05 00:46:21.717717 | orchestrator | Sunday 05 April 2026  00:46:20 +0000 (0:00:00.154)       0:00:21.914 **********
2026-04-05 00:46:21.717723 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717729 | orchestrator |
2026-04-05 00:46:21.717735 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-05 00:46:21.717743 | orchestrator | Sunday 05 April 2026  00:46:20 +0000 (0:00:00.157)       0:00:22.072 **********
2026-04-05 00:46:21.717751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:21.717768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:21.717775 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717782 | orchestrator |
2026-04-05 00:46:21.717790 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-05 00:46:21.717797 | orchestrator | Sunday 05 April 2026  00:46:20 +0000 (0:00:00.158)       0:00:22.230 **********
2026-04-05 00:46:21.717804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:21.717811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:21.717819 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717826 | orchestrator |
2026-04-05 00:46:21.717833 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-05 00:46:21.717840 | orchestrator | Sunday 05 April 2026  00:46:21 +0000 (0:00:00.380)       0:00:22.610 **********
2026-04-05 00:46:21.717848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:21.717855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:21.717868 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717875 | orchestrator |
2026-04-05 00:46:21.717881 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-05 00:46:21.717888 | orchestrator | Sunday 05 April 2026  00:46:21 +0000 (0:00:00.175)       0:00:22.786 **********
2026-04-05 00:46:21.717895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:21.717903 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:21.717910 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717917 | orchestrator |
2026-04-05 00:46:21.717925 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-05 00:46:21.717932 | orchestrator | Sunday 05 April 2026  00:46:21 +0000 (0:00:00.161)       0:00:22.947 **********
2026-04-05 00:46:21.717939 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:21.717946 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:21.717954 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:21.717961 | orchestrator |
2026-04-05 00:46:21.717968 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-05 00:46:21.717975 | orchestrator | Sunday 05 April 2026  00:46:21 +0000 (0:00:00.173)       0:00:23.121 **********
2026-04-05 00:46:21.717988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:27.248414 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:27.248561 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:27.248571 | orchestrator |
2026-04-05 00:46:27.248577 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-05 00:46:27.248583 | orchestrator | Sunday 05 April 2026  00:46:21 +0000 (0:00:00.176)       0:00:23.298 **********
2026-04-05 00:46:27.248588 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:27.248593 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:27.248597 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:27.248601 | orchestrator |
2026-04-05 00:46:27.248606 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-05 00:46:27.248610 | orchestrator | Sunday 05 April 2026  00:46:21 +0000 (0:00:00.164)       0:00:23.463 **********
2026-04-05 00:46:27.248614 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:27.248629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:27.248633 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:27.248637 | orchestrator |
2026-04-05 00:46:27.248641 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-05 00:46:27.248645 | orchestrator | Sunday 05 April 2026  00:46:22 +0000 (0:00:00.177)       0:00:23.641 **********
2026-04-05 00:46:27.248649 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:27.248654 | orchestrator |
2026-04-05 00:46:27.248658 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-05 00:46:27.248675 | orchestrator | Sunday 05 April 2026  00:46:22 +0000 (0:00:00.512)       0:00:24.153 **********
2026-04-05 00:46:27.248681 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:27.248688 | orchestrator |
2026-04-05 00:46:27.248694 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-05 00:46:27.248700 | orchestrator | Sunday 05 April 2026  00:46:23 +0000 (0:00:00.511)       0:00:24.665 **********
2026-04-05 00:46:27.248705 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:46:27.248709 | orchestrator |
2026-04-05 00:46:27.248714 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-05 00:46:27.248721 | orchestrator | Sunday 05 April 2026  00:46:23 +0000 (0:00:00.165)       0:00:24.830 **********
2026-04-05 00:46:27.248725 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'vg_name': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:27.248731 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'vg_name': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:27.248736 | orchestrator |
2026-04-05 00:46:27.248743 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-05 00:46:27.248748 | orchestrator | Sunday 05 April 2026  00:46:23 +0000 (0:00:00.169)       0:00:24.999 **********
2026-04-05 00:46:27.248752 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:27.248756 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:27.248760 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:27.248766 | orchestrator |
2026-04-05 00:46:27.248773 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-05 00:46:27.248778 | orchestrator | Sunday 05 April 2026  00:46:23 +0000 (0:00:00.154)       0:00:25.154 **********
2026-04-05 00:46:27.248782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:27.248786 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:27.248790 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:27.248794 | orchestrator |
2026-04-05 00:46:27.248798 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-05 00:46:27.248802 | orchestrator | Sunday 05 April 2026  00:46:24 +0000 (0:00:00.447)       0:00:25.602 **********
2026-04-05 00:46:27.248807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'})
2026-04-05 00:46:27.248814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'})
2026-04-05 00:46:27.248818 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:46:27.248822 | orchestrator |
2026-04-05 00:46:27.248827 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-05 00:46:27.248835 | orchestrator | Sunday 05 April 2026  00:46:24 +0000 (0:00:00.170)       0:00:25.772 **********
2026-04-05 00:46:27.248852 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:46:27.248857 | orchestrator |     "lvm_report": {
2026-04-05 00:46:27.248861 | orchestrator |         "lv": [
2026-04-05 00:46:27.248866 | orchestrator |             {
2026-04-05 00:46:27.248871 | orchestrator |                 "lv_name": "osd-block-157b1f80-825d-547a-87b1-b4c204357e87",
2026-04-05 00:46:27.248879 | orchestrator |                 "vg_name": "ceph-157b1f80-825d-547a-87b1-b4c204357e87"
2026-04-05 00:46:27.248883 | orchestrator |             },
2026-04-05 00:46:27.248887 | orchestrator |             {
2026-04-05 00:46:27.248891 | orchestrator |                 "lv_name": "osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670",
2026-04-05 00:46:27.248899 | orchestrator |                 "vg_name": "ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670"
2026-04-05 00:46:27.248903 | orchestrator |             }
2026-04-05 00:46:27.248907 | orchestrator |         ],
2026-04-05 00:46:27.248912 | orchestrator |         "pv": [
2026-04-05 00:46:27.248916 | orchestrator |             {
2026-04-05 00:46:27.248920 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-05 00:46:27.248925 | orchestrator |                 "vg_name": "ceph-157b1f80-825d-547a-87b1-b4c204357e87"
2026-04-05 00:46:27.248932 | orchestrator |             },
2026-04-05 00:46:27.248937 | orchestrator |             {
2026-04-05 00:46:27.248940 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-05 00:46:27.248946 | orchestrator |                 "vg_name": "ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670"
2026-04-05 00:46:27.248952 | orchestrator |             }
2026-04-05 00:46:27.248956 | orchestrator |         ]
2026-04-05 00:46:27.248960 | orchestrator |     }
2026-04-05 00:46:27.248964 | orchestrator | }
2026-04-05 00:46:27.248969 | orchestrator |
2026-04-05 00:46:27.248974 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-05 00:46:27.248979 | orchestrator |
2026-04-05 00:46:27.248983 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 00:46:27.248989 | orchestrator | Sunday 05 April 2026  00:46:24 +0000 (0:00:00.288)       0:00:26.060 **********
2026-04-05 00:46:27.248993 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-05 00:46:27.248998 | orchestrator |
2026-04-05 00:46:27.249003 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:46:27.249008 | orchestrator | Sunday 05 April 2026  00:46:24 +0000 (0:00:00.271)       0:00:26.332 **********
2026-04-05 00:46:27.249013 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:46:27.249021 | orchestrator |
2026-04-05 00:46:27.249026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:27.249030 | orchestrator | Sunday 05 April 2026  00:46:25 +0000 (0:00:00.245)       0:00:26.577 **********
2026-04-05 00:46:27.249035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-05 00:46:27.249039 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-05 00:46:27.249044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-05 00:46:27.249048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-05 00:46:27.249053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-05 00:46:27.249057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-05 00:46:27.249062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-05 00:46:27.249066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-05 00:46:27.249071 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-05 00:46:27.249075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-05 00:46:27.249080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-05 00:46:27.249085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-05 00:46:27.249089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-05 00:46:27.249093 | orchestrator |
2026-04-05 00:46:27.249098 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:27.249102 | orchestrator | Sunday 05 April 2026  00:46:25 +0000 (0:00:00.482)       0:00:27.060 **********
2026-04-05 00:46:27.249107 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:27.249111 | orchestrator |
2026-04-05 00:46:27.249116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:27.249124 | orchestrator | Sunday 05 April 2026  00:46:25 +0000 (0:00:00.211)       0:00:27.272 **********
2026-04-05 00:46:27.249129 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:27.249133 | orchestrator |
2026-04-05 00:46:27.249138 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:27.249143 | orchestrator | Sunday 05 April 2026  00:46:25 +0000 (0:00:00.188)       0:00:27.460 **********
2026-04-05 00:46:27.249147 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:27.249152 | orchestrator |
2026-04-05 00:46:27.249157 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:27.249161 | orchestrator | Sunday 05 April 2026  00:46:26 +0000 (0:00:00.207)       0:00:27.669 **********
2026-04-05 00:46:27.249166 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:27.249170 | orchestrator |
2026-04-05 00:46:27.249175 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:27.249179 | orchestrator | Sunday 05 April 2026  00:46:26 +0000 (0:00:00.613)       0:00:28.282 **********
2026-04-05 00:46:27.249212 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:27.249219 | orchestrator |
2026-04-05 00:46:27.249223 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:27.249228 | orchestrator | Sunday 05 April 2026  00:46:27 +0000 (0:00:00.209)       0:00:28.492 **********
2026-04-05 00:46:27.249250 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:27.249326 | orchestrator |
2026-04-05 00:46:27.249335 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:38.019371 | orchestrator | Sunday 05 April 2026  00:46:27 +0000 (0:00:00.223)       0:00:28.716 **********
2026-04-05 00:46:38.019477 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.019493 | orchestrator |
2026-04-05 00:46:38.019555 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:38.019586 | orchestrator | Sunday 05 April 2026  00:46:27 +0000 (0:00:00.214)       0:00:28.930 **********
2026-04-05 00:46:38.019598 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.019609 | orchestrator |
2026-04-05 00:46:38.019621 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:38.019632 | orchestrator | Sunday 05 April 2026  00:46:27 +0000 (0:00:00.222)       0:00:29.153 **********
2026-04-05 00:46:38.019643 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869)
2026-04-05 00:46:38.019655 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869)
2026-04-05 00:46:38.019666 | orchestrator |
2026-04-05 00:46:38.019677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:38.019688 | orchestrator | Sunday 05 April 2026  00:46:28 +0000 (0:00:00.423)       0:00:29.576 **********
2026-04-05 00:46:38.019699 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2)
2026-04-05 00:46:38.019711 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2)
2026-04-05 00:46:38.019722 | orchestrator |
2026-04-05 00:46:38.019733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:38.019748 | orchestrator | Sunday 05 April 2026  00:46:28 +0000 (0:00:00.467)       0:00:30.044 **********
2026-04-05 00:46:38.019760 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767)
2026-04-05 00:46:38.019771 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767)
2026-04-05 00:46:38.019782 | orchestrator |
2026-04-05 00:46:38.019792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:38.019803 | orchestrator | Sunday 05 April 2026  00:46:29 +0000 (0:00:00.441)       0:00:30.486 **********
2026-04-05 00:46:38.019814 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92)
2026-04-05 00:46:38.019847 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92)
2026-04-05 00:46:38.019859 | orchestrator |
2026-04-05 00:46:38.019870 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:46:38.019880 | orchestrator | Sunday 05 April 2026  00:46:29 +0000 (0:00:00.478)       0:00:30.964 **********
2026-04-05 00:46:38.019891 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 00:46:38.019902 | orchestrator |
2026-04-05 00:46:38.019913 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.019923 | orchestrator | Sunday 05 April 2026  00:46:29 +0000 (0:00:00.338)       0:00:31.303 **********
2026-04-05 00:46:38.019934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-05 00:46:38.019946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-05 00:46:38.019970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-05 00:46:38.019981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-05 00:46:38.019991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-05 00:46:38.020002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-05 00:46:38.020013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-05 00:46:38.020023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-05 00:46:38.020035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-05 00:46:38.020045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-05 00:46:38.020056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-05 00:46:38.020067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-05 00:46:38.020077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-05 00:46:38.020088 | orchestrator |
2026-04-05 00:46:38.020099 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020110 | orchestrator | Sunday 05 April 2026  00:46:30 +0000 (0:00:00.680)       0:00:31.983 **********
2026-04-05 00:46:38.020121 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020131 | orchestrator |
2026-04-05 00:46:38.020142 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020153 | orchestrator | Sunday 05 April 2026  00:46:30 +0000 (0:00:00.226)       0:00:32.210 **********
2026-04-05 00:46:38.020163 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020174 | orchestrator |
2026-04-05 00:46:38.020185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020196 | orchestrator | Sunday 05 April 2026  00:46:30 +0000 (0:00:00.222)       0:00:32.433 **********
2026-04-05 00:46:38.020207 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020218 | orchestrator |
2026-04-05 00:46:38.020247 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020258 | orchestrator | Sunday 05 April 2026  00:46:31 +0000 (0:00:00.220)       0:00:32.653 **********
2026-04-05 00:46:38.020269 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020280 | orchestrator |
2026-04-05 00:46:38.020290 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020301 | orchestrator | Sunday 05 April 2026  00:46:31 +0000 (0:00:00.199)       0:00:32.853 **********
2026-04-05 00:46:38.020312 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020322 | orchestrator |
2026-04-05 00:46:38.020333 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020352 | orchestrator | Sunday 05 April 2026  00:46:31 +0000 (0:00:00.238)       0:00:33.092 **********
2026-04-05 00:46:38.020363 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020374 | orchestrator |
2026-04-05 00:46:38.020384 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020395 | orchestrator | Sunday 05 April 2026  00:46:31 +0000 (0:00:00.211)       0:00:33.303 **********
2026-04-05 00:46:38.020406 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020417 | orchestrator |
2026-04-05 00:46:38.020427 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020438 | orchestrator | Sunday 05 April 2026  00:46:32 +0000 (0:00:00.225)       0:00:33.529 **********
2026-04-05 00:46:38.020449 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020460 | orchestrator |
2026-04-05 00:46:38.020470 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020481 | orchestrator | Sunday 05 April 2026  00:46:32 +0000 (0:00:00.225)       0:00:33.754 **********
2026-04-05 00:46:38.020517 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-05 00:46:38.020529 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-05 00:46:38.020540 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-05 00:46:38.020551 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-05 00:46:38.020561 | orchestrator |
2026-04-05 00:46:38.020572 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020583 | orchestrator | Sunday 05 April 2026  00:46:33 +0000 (0:00:00.871)       0:00:34.626 **********
2026-04-05 00:46:38.020593 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020604 | orchestrator |
2026-04-05 00:46:38.020615 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020625 | orchestrator | Sunday 05 April 2026  00:46:33 +0000 (0:00:00.209)       0:00:34.835 **********
2026-04-05 00:46:38.020636 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020646 | orchestrator |
2026-04-05 00:46:38.020657 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020668 | orchestrator | Sunday 05 April 2026  00:46:33 +0000 (0:00:00.210)       0:00:35.046 **********
2026-04-05 00:46:38.020678 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020689 | orchestrator |
2026-04-05 00:46:38.020700 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:46:38.020710 | orchestrator | Sunday 05 April 2026  00:46:34 +0000 (0:00:00.698)       0:00:35.744 **********
2026-04-05 00:46:38.020721 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020731 | orchestrator |
2026-04-05 00:46:38.020742 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-05 00:46:38.020753 | orchestrator | Sunday 05 April 2026  00:46:34 +0000 (0:00:00.196)       0:00:35.940 **********
2026-04-05 00:46:38.020763 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:46:38.020774 | orchestrator |
2026-04-05 00:46:38.020784 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-05 00:46:38.020795 | orchestrator | Sunday 05 April 2026  00:46:34 +0000 (0:00:00.138)       0:00:36.078 **********
2026-04-05 00:46:38.020806 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}})
2026-04-05 00:46:38.020817 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}})
2026-04-05 00:46:38.020828 | orchestrator |
2026-04-05 00:46:38.020839 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-05 00:46:38.020849 | orchestrator | Sunday 05 April 2026  00:46:34 +0000 (0:00:00.210)       0:00:36.289 **********
2026-04-05 00:46:38.020861 | orchestrator | changed:
[testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}) 2026-04-05 00:46:38.020872 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}) 2026-04-05 00:46:38.020890 | orchestrator | 2026-04-05 00:46:38.020901 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-05 00:46:38.020912 | orchestrator | Sunday 05 April 2026 00:46:36 +0000 (0:00:01.850) 0:00:38.140 ********** 2026-04-05 00:46:38.020923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:38.020935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:38.020945 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:38.020956 | orchestrator | 2026-04-05 00:46:38.020967 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-05 00:46:38.020977 | orchestrator | Sunday 05 April 2026 00:46:36 +0000 (0:00:00.160) 0:00:38.301 ********** 2026-04-05 00:46:38.020988 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}) 2026-04-05 00:46:38.021005 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}) 2026-04-05 00:46:43.931222 | orchestrator | 2026-04-05 00:46:43.931342 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-05 00:46:43.931357 | orchestrator | Sunday 05 April 2026 
00:46:38 +0000 (0:00:01.272) 0:00:39.573 ********** 2026-04-05 00:46:43.931366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:43.931376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:43.931385 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.931394 | orchestrator | 2026-04-05 00:46:43.931405 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-05 00:46:43.931419 | orchestrator | Sunday 05 April 2026 00:46:38 +0000 (0:00:00.186) 0:00:39.759 ********** 2026-04-05 00:46:43.931432 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.931445 | orchestrator | 2026-04-05 00:46:43.931457 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-05 00:46:43.931468 | orchestrator | Sunday 05 April 2026 00:46:38 +0000 (0:00:00.121) 0:00:39.881 ********** 2026-04-05 00:46:43.931482 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:43.931567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:43.931585 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.931600 | orchestrator | 2026-04-05 00:46:43.931614 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-05 00:46:43.931628 | orchestrator | Sunday 05 April 2026 00:46:38 +0000 (0:00:00.157) 0:00:40.038 ********** 2026-04-05 00:46:43.931642 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
00:46:43.931651 | orchestrator | 2026-04-05 00:46:43.931660 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-05 00:46:43.931668 | orchestrator | Sunday 05 April 2026 00:46:38 +0000 (0:00:00.145) 0:00:40.184 ********** 2026-04-05 00:46:43.931677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:43.931685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:43.931694 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.931727 | orchestrator | 2026-04-05 00:46:43.931736 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-05 00:46:43.931744 | orchestrator | Sunday 05 April 2026 00:46:38 +0000 (0:00:00.159) 0:00:40.343 ********** 2026-04-05 00:46:43.931753 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.931762 | orchestrator | 2026-04-05 00:46:43.931773 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-05 00:46:43.931782 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.374) 0:00:40.717 ********** 2026-04-05 00:46:43.931792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:43.931802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:43.931811 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.931820 | orchestrator | 2026-04-05 00:46:43.931829 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-04-05 00:46:43.931839 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.166) 0:00:40.884 ********** 2026-04-05 00:46:43.931848 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:43.931862 | orchestrator | 2026-04-05 00:46:43.931877 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-05 00:46:43.931890 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.145) 0:00:41.030 ********** 2026-04-05 00:46:43.931903 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:43.931917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:43.931931 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.931944 | orchestrator | 2026-04-05 00:46:43.931959 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-05 00:46:43.931973 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.178) 0:00:41.209 ********** 2026-04-05 00:46:43.931987 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:43.931999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:43.932010 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932024 | orchestrator | 2026-04-05 00:46:43.932036 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-05 00:46:43.932068 | orchestrator | Sunday 05 April 2026 00:46:39 +0000 (0:00:00.158) 0:00:41.367 
********** 2026-04-05 00:46:43.932101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:43.932111 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:43.932120 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932128 | orchestrator | 2026-04-05 00:46:43.932136 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-05 00:46:43.932144 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.166) 0:00:41.534 ********** 2026-04-05 00:46:43.932152 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932160 | orchestrator | 2026-04-05 00:46:43.932168 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-05 00:46:43.932176 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.134) 0:00:41.668 ********** 2026-04-05 00:46:43.932184 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932200 | orchestrator | 2026-04-05 00:46:43.932208 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-05 00:46:43.932216 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.141) 0:00:41.810 ********** 2026-04-05 00:46:43.932224 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932231 | orchestrator | 2026-04-05 00:46:43.932244 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-05 00:46:43.932252 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.150) 0:00:41.961 ********** 2026-04-05 00:46:43.932261 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 00:46:43.932269 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-05 
00:46:43.932277 | orchestrator | } 2026-04-05 00:46:43.932285 | orchestrator | 2026-04-05 00:46:43.932293 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-05 00:46:43.932307 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.154) 0:00:42.115 ********** 2026-04-05 00:46:43.932320 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 00:46:43.932333 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-05 00:46:43.932346 | orchestrator | } 2026-04-05 00:46:43.932359 | orchestrator | 2026-04-05 00:46:43.932372 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-05 00:46:43.932384 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.154) 0:00:42.270 ********** 2026-04-05 00:46:43.932397 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 00:46:43.932410 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-05 00:46:43.932425 | orchestrator | } 2026-04-05 00:46:43.932438 | orchestrator | 2026-04-05 00:46:43.932452 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-05 00:46:43.932465 | orchestrator | Sunday 05 April 2026 00:46:40 +0000 (0:00:00.141) 0:00:42.411 ********** 2026-04-05 00:46:43.932478 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:43.932491 | orchestrator | 2026-04-05 00:46:43.932528 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-05 00:46:43.932541 | orchestrator | Sunday 05 April 2026 00:46:41 +0000 (0:00:00.776) 0:00:43.187 ********** 2026-04-05 00:46:43.932554 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:43.932567 | orchestrator | 2026-04-05 00:46:43.932581 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-05 00:46:43.932594 | orchestrator | Sunday 05 April 2026 00:46:42 +0000 (0:00:00.574) 0:00:43.762 ********** 2026-04-05 
00:46:43.932606 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:43.932618 | orchestrator | 2026-04-05 00:46:43.932631 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-05 00:46:43.932643 | orchestrator | Sunday 05 April 2026 00:46:42 +0000 (0:00:00.526) 0:00:44.288 ********** 2026-04-05 00:46:43.932657 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:43.932669 | orchestrator | 2026-04-05 00:46:43.932683 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-05 00:46:43.932697 | orchestrator | Sunday 05 April 2026 00:46:42 +0000 (0:00:00.153) 0:00:44.442 ********** 2026-04-05 00:46:43.932705 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932713 | orchestrator | 2026-04-05 00:46:43.932721 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-05 00:46:43.932729 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.121) 0:00:44.563 ********** 2026-04-05 00:46:43.932737 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932745 | orchestrator | 2026-04-05 00:46:43.932753 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-05 00:46:43.932761 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.117) 0:00:44.681 ********** 2026-04-05 00:46:43.932769 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 00:46:43.932777 | orchestrator |  "vgs_report": { 2026-04-05 00:46:43.932785 | orchestrator |  "vg": [] 2026-04-05 00:46:43.932793 | orchestrator |  } 2026-04-05 00:46:43.932801 | orchestrator | } 2026-04-05 00:46:43.932809 | orchestrator | 2026-04-05 00:46:43.932817 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-05 00:46:43.932834 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.152) 0:00:44.834 ********** 2026-04-05 00:46:43.932842 | 
orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932850 | orchestrator | 2026-04-05 00:46:43.932858 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-05 00:46:43.932866 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.137) 0:00:44.971 ********** 2026-04-05 00:46:43.932874 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932882 | orchestrator | 2026-04-05 00:46:43.932890 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-05 00:46:43.932898 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.134) 0:00:45.106 ********** 2026-04-05 00:46:43.932905 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932913 | orchestrator | 2026-04-05 00:46:43.932921 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-05 00:46:43.932929 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.152) 0:00:45.258 ********** 2026-04-05 00:46:43.932937 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:43.932945 | orchestrator | 2026-04-05 00:46:43.932962 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-05 00:46:48.562675 | orchestrator | Sunday 05 April 2026 00:46:43 +0000 (0:00:00.142) 0:00:45.401 ********** 2026-04-05 00:46:48.562873 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.562896 | orchestrator | 2026-04-05 00:46:48.562909 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-05 00:46:48.562920 | orchestrator | Sunday 05 April 2026 00:46:44 +0000 (0:00:00.143) 0:00:45.545 ********** 2026-04-05 00:46:48.562931 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.562942 | orchestrator | 2026-04-05 00:46:48.562953 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-04-05 00:46:48.562964 | orchestrator | Sunday 05 April 2026 00:46:44 +0000 (0:00:00.362) 0:00:45.907 ********** 2026-04-05 00:46:48.562975 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.562985 | orchestrator | 2026-04-05 00:46:48.562996 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-05 00:46:48.563007 | orchestrator | Sunday 05 April 2026 00:46:44 +0000 (0:00:00.138) 0:00:46.045 ********** 2026-04-05 00:46:48.563017 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563028 | orchestrator | 2026-04-05 00:46:48.563039 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-05 00:46:48.563050 | orchestrator | Sunday 05 April 2026 00:46:44 +0000 (0:00:00.143) 0:00:46.189 ********** 2026-04-05 00:46:48.563060 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563071 | orchestrator | 2026-04-05 00:46:48.563102 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-05 00:46:48.563124 | orchestrator | Sunday 05 April 2026 00:46:44 +0000 (0:00:00.153) 0:00:46.342 ********** 2026-04-05 00:46:48.563142 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563162 | orchestrator | 2026-04-05 00:46:48.563181 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-05 00:46:48.563199 | orchestrator | Sunday 05 April 2026 00:46:44 +0000 (0:00:00.132) 0:00:46.475 ********** 2026-04-05 00:46:48.563218 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563238 | orchestrator | 2026-04-05 00:46:48.563256 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-05 00:46:48.563275 | orchestrator | Sunday 05 April 2026 00:46:45 +0000 (0:00:00.140) 0:00:46.616 ********** 2026-04-05 00:46:48.563295 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563316 
| orchestrator | 2026-04-05 00:46:48.563337 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-05 00:46:48.563358 | orchestrator | Sunday 05 April 2026 00:46:45 +0000 (0:00:00.172) 0:00:46.788 ********** 2026-04-05 00:46:48.563382 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563399 | orchestrator | 2026-04-05 00:46:48.563415 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-05 00:46:48.563465 | orchestrator | Sunday 05 April 2026 00:46:45 +0000 (0:00:00.141) 0:00:46.929 ********** 2026-04-05 00:46:48.563486 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563552 | orchestrator | 2026-04-05 00:46:48.563572 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-05 00:46:48.563591 | orchestrator | Sunday 05 April 2026 00:46:45 +0000 (0:00:00.140) 0:00:47.070 ********** 2026-04-05 00:46:48.563612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.563631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.563649 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563667 | orchestrator | 2026-04-05 00:46:48.563686 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-05 00:46:48.563706 | orchestrator | Sunday 05 April 2026 00:46:45 +0000 (0:00:00.147) 0:00:47.217 ********** 2026-04-05 00:46:48.563726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.563746 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.563765 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563781 | orchestrator | 2026-04-05 00:46:48.563801 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-05 00:46:48.563821 | orchestrator | Sunday 05 April 2026 00:46:45 +0000 (0:00:00.171) 0:00:47.388 ********** 2026-04-05 00:46:48.563840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.563858 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.563876 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.563895 | orchestrator | 2026-04-05 00:46:48.563911 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-05 00:46:48.563930 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.147) 0:00:47.535 ********** 2026-04-05 00:46:48.563950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.563968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.563988 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.564007 | orchestrator | 2026-04-05 00:46:48.564052 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-05 00:46:48.564072 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.379) 0:00:47.915 ********** 2026-04-05 
00:46:48.564084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.564095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.564106 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.564117 | orchestrator | 2026-04-05 00:46:48.564127 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-05 00:46:48.564140 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.163) 0:00:48.078 ********** 2026-04-05 00:46:48.564159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.564206 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.564225 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.564240 | orchestrator | 2026-04-05 00:46:48.564256 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-05 00:46:48.564272 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.163) 0:00:48.241 ********** 2026-04-05 00:46:48.564289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.564307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.564323 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.564341 | orchestrator | 
2026-04-05 00:46:48.564359 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-05 00:46:48.564376 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.168) 0:00:48.410 ********** 2026-04-05 00:46:48.564394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.564411 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.564431 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.564450 | orchestrator | 2026-04-05 00:46:48.564469 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-05 00:46:48.564489 | orchestrator | Sunday 05 April 2026 00:46:47 +0000 (0:00:00.164) 0:00:48.574 ********** 2026-04-05 00:46:48.564535 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:48.564555 | orchestrator | 2026-04-05 00:46:48.564573 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-05 00:46:48.564592 | orchestrator | Sunday 05 April 2026 00:46:47 +0000 (0:00:00.471) 0:00:49.046 ********** 2026-04-05 00:46:48.564610 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:48.564629 | orchestrator | 2026-04-05 00:46:48.564648 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-05 00:46:48.564666 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.476) 0:00:49.522 ********** 2026-04-05 00:46:48.564685 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:46:48.564704 | orchestrator | 2026-04-05 00:46:48.564722 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-05 00:46:48.564738 | orchestrator | Sunday 05 April 2026 
00:46:48 +0000 (0:00:00.136) 0:00:49.659 ********** 2026-04-05 00:46:48.564750 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'vg_name': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}) 2026-04-05 00:46:48.564762 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'vg_name': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}) 2026-04-05 00:46:48.564773 | orchestrator | 2026-04-05 00:46:48.564784 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-05 00:46:48.564795 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.165) 0:00:49.825 ********** 2026-04-05 00:46:48.564806 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.564817 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:48.564828 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:48.564839 | orchestrator | 2026-04-05 00:46:48.564862 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-05 00:46:48.564873 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.145) 0:00:49.970 ********** 2026-04-05 00:46:48.564884 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:48.564910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:54.239036 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:54.239140 | orchestrator | 2026-04-05 
00:46:54.239163 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-05 00:46:54.239179 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.157) 0:00:50.127 ********** 2026-04-05 00:46:54.239194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'})  2026-04-05 00:46:54.239209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'})  2026-04-05 00:46:54.239223 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:46:54.239238 | orchestrator | 2026-04-05 00:46:54.239252 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-05 00:46:54.239266 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.165) 0:00:50.293 ********** 2026-04-05 00:46:54.239281 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 00:46:54.239295 | orchestrator |  "lvm_report": { 2026-04-05 00:46:54.239311 | orchestrator |  "lv": [ 2026-04-05 00:46:54.239324 | orchestrator |  { 2026-04-05 00:46:54.239339 | orchestrator |  "lv_name": "osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff", 2026-04-05 00:46:54.239354 | orchestrator |  "vg_name": "ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff" 2026-04-05 00:46:54.239368 | orchestrator |  }, 2026-04-05 00:46:54.239380 | orchestrator |  { 2026-04-05 00:46:54.239394 | orchestrator |  "lv_name": "osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621", 2026-04-05 00:46:54.239407 | orchestrator |  "vg_name": "ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621" 2026-04-05 00:46:54.239421 | orchestrator |  } 2026-04-05 00:46:54.239435 | orchestrator |  ], 2026-04-05 00:46:54.239449 | orchestrator |  "pv": [ 2026-04-05 00:46:54.239462 | orchestrator |  { 2026-04-05 00:46:54.239476 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-05 
00:46:54.239490 | orchestrator |  "vg_name": "ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff" 2026-04-05 00:46:54.239564 | orchestrator |  }, 2026-04-05 00:46:54.239575 | orchestrator |  { 2026-04-05 00:46:54.239584 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-05 00:46:54.239594 | orchestrator |  "vg_name": "ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621" 2026-04-05 00:46:54.239641 | orchestrator |  } 2026-04-05 00:46:54.239651 | orchestrator |  ] 2026-04-05 00:46:54.239661 | orchestrator |  } 2026-04-05 00:46:54.239670 | orchestrator | } 2026-04-05 00:46:54.239680 | orchestrator | 2026-04-05 00:46:54.239689 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-05 00:46:54.239699 | orchestrator | 2026-04-05 00:46:54.239708 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 00:46:54.239717 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.457) 0:00:50.750 ********** 2026-04-05 00:46:54.239727 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-05 00:46:54.239736 | orchestrator | 2026-04-05 00:46:54.239745 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 00:46:54.239755 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.228) 0:00:50.979 ********** 2026-04-05 00:46:54.239765 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:46:54.239790 | orchestrator | 2026-04-05 00:46:54.239799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.239806 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.212) 0:00:51.191 ********** 2026-04-05 00:46:54.239814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-05 00:46:54.239822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-05 
00:46:54.239830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-05 00:46:54.239838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-05 00:46:54.239849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-05 00:46:54.239857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-05 00:46:54.239865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-05 00:46:54.239873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-05 00:46:54.239881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-05 00:46:54.239889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-05 00:46:54.239896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-05 00:46:54.239904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-05 00:46:54.239912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-05 00:46:54.239919 | orchestrator | 2026-04-05 00:46:54.239927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.239935 | orchestrator | Sunday 05 April 2026 00:46:50 +0000 (0:00:00.388) 0:00:51.580 ********** 2026-04-05 00:46:54.239943 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:46:54.239950 | orchestrator | 2026-04-05 00:46:54.239958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.239966 | orchestrator | Sunday 05 April 2026 00:46:50 +0000 (0:00:00.169) 0:00:51.750 
********** 2026-04-05 00:46:54.239974 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:46:54.239982 | orchestrator | 2026-04-05 00:46:54.239990 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240014 | orchestrator | Sunday 05 April 2026 00:46:50 +0000 (0:00:00.183) 0:00:51.934 ********** 2026-04-05 00:46:54.240023 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:46:54.240030 | orchestrator | 2026-04-05 00:46:54.240038 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240046 | orchestrator | Sunday 05 April 2026 00:46:50 +0000 (0:00:00.196) 0:00:52.130 ********** 2026-04-05 00:46:54.240054 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:46:54.240062 | orchestrator | 2026-04-05 00:46:54.240070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240077 | orchestrator | Sunday 05 April 2026 00:46:50 +0000 (0:00:00.187) 0:00:52.318 ********** 2026-04-05 00:46:54.240085 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:46:54.240093 | orchestrator | 2026-04-05 00:46:54.240101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240109 | orchestrator | Sunday 05 April 2026 00:46:51 +0000 (0:00:00.193) 0:00:52.512 ********** 2026-04-05 00:46:54.240117 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:46:54.240124 | orchestrator | 2026-04-05 00:46:54.240132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240140 | orchestrator | Sunday 05 April 2026 00:46:51 +0000 (0:00:00.461) 0:00:52.973 ********** 2026-04-05 00:46:54.240151 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:46:54.240159 | orchestrator | 2026-04-05 00:46:54.240173 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-04-05 00:46:54.240180 | orchestrator | Sunday 05 April 2026 00:46:51 +0000 (0:00:00.184) 0:00:53.158 ********** 2026-04-05 00:46:54.240188 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:46:54.240196 | orchestrator | 2026-04-05 00:46:54.240204 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240212 | orchestrator | Sunday 05 April 2026 00:46:51 +0000 (0:00:00.183) 0:00:53.341 ********** 2026-04-05 00:46:54.240219 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0) 2026-04-05 00:46:54.240228 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0) 2026-04-05 00:46:54.240235 | orchestrator | 2026-04-05 00:46:54.240243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240251 | orchestrator | Sunday 05 April 2026 00:46:52 +0000 (0:00:00.452) 0:00:53.794 ********** 2026-04-05 00:46:54.240259 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9) 2026-04-05 00:46:54.240267 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9) 2026-04-05 00:46:54.240274 | orchestrator | 2026-04-05 00:46:54.240282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240290 | orchestrator | Sunday 05 April 2026 00:46:52 +0000 (0:00:00.453) 0:00:54.247 ********** 2026-04-05 00:46:54.240298 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304) 2026-04-05 00:46:54.240306 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304) 2026-04-05 00:46:54.240313 | orchestrator | 2026-04-05 00:46:54.240321 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240329 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.394) 0:00:54.641 ********** 2026-04-05 00:46:54.240337 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d) 2026-04-05 00:46:54.240344 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d) 2026-04-05 00:46:54.240352 | orchestrator | 2026-04-05 00:46:54.240360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:46:54.240368 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.413) 0:00:55.055 ********** 2026-04-05 00:46:54.240376 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:46:54.240387 | orchestrator | 2026-04-05 00:46:54.240401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:46:54.240414 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.355) 0:00:55.411 ********** 2026-04-05 00:46:54.240428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-05 00:46:54.240438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-05 00:46:54.240446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-05 00:46:54.240454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-05 00:46:54.240462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-05 00:46:54.240470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-05 00:46:54.240478 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-05 00:46:54.240486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-05 00:46:54.240493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-05 00:46:54.240537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-05 00:46:54.240552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-05 00:46:54.240566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-05 00:47:02.358915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-05 00:47:02.359028 | orchestrator | 2026-04-05 00:47:02.359054 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359067 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.383) 0:00:55.795 ********** 2026-04-05 00:47:02.359078 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359090 | orchestrator | 2026-04-05 00:47:02.359101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359112 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.203) 0:00:55.998 ********** 2026-04-05 00:47:02.359123 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359134 | orchestrator | 2026-04-05 00:47:02.359145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359156 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.181) 0:00:56.180 ********** 2026-04-05 00:47:02.359167 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359178 | orchestrator | 2026-04-05 00:47:02.359188 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359215 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.511) 0:00:56.691 ********** 2026-04-05 00:47:02.359226 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359237 | orchestrator | 2026-04-05 00:47:02.359248 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359259 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.182) 0:00:56.874 ********** 2026-04-05 00:47:02.359270 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359281 | orchestrator | 2026-04-05 00:47:02.359291 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359302 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.184) 0:00:57.058 ********** 2026-04-05 00:47:02.359313 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359324 | orchestrator | 2026-04-05 00:47:02.359335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359346 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.252) 0:00:57.311 ********** 2026-04-05 00:47:02.359357 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359368 | orchestrator | 2026-04-05 00:47:02.359379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359390 | orchestrator | Sunday 05 April 2026 00:46:56 +0000 (0:00:00.187) 0:00:57.498 ********** 2026-04-05 00:47:02.359401 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359412 | orchestrator | 2026-04-05 00:47:02.359423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359434 | orchestrator | Sunday 05 April 2026 00:46:56 +0000 (0:00:00.211) 0:00:57.710 ********** 
2026-04-05 00:47:02.359445 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-05 00:47:02.359457 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-05 00:47:02.359468 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-05 00:47:02.359482 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-05 00:47:02.359495 | orchestrator | 2026-04-05 00:47:02.359582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359601 | orchestrator | Sunday 05 April 2026 00:46:56 +0000 (0:00:00.602) 0:00:58.312 ********** 2026-04-05 00:47:02.359620 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359640 | orchestrator | 2026-04-05 00:47:02.359661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359680 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:00.189) 0:00:58.502 ********** 2026-04-05 00:47:02.359721 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359735 | orchestrator | 2026-04-05 00:47:02.359748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359760 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:00.191) 0:00:58.693 ********** 2026-04-05 00:47:02.359774 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359787 | orchestrator | 2026-04-05 00:47:02.359799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:47:02.359812 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:00.180) 0:00:58.873 ********** 2026-04-05 00:47:02.359826 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359846 | orchestrator | 2026-04-05 00:47:02.359870 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-05 00:47:02.359895 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 
(0:00:00.176) 0:00:59.050 ********** 2026-04-05 00:47:02.359913 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.359930 | orchestrator | 2026-04-05 00:47:02.359947 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-05 00:47:02.359965 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:00.123) 0:00:59.173 ********** 2026-04-05 00:47:02.359985 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}}) 2026-04-05 00:47:02.360004 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecfcc343-98df-5597-aad3-97c87b883418'}}) 2026-04-05 00:47:02.360022 | orchestrator | 2026-04-05 00:47:02.360041 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-05 00:47:02.360053 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.314) 0:00:59.488 ********** 2026-04-05 00:47:02.360065 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}) 2026-04-05 00:47:02.360077 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'}) 2026-04-05 00:47:02.360088 | orchestrator | 2026-04-05 00:47:02.360100 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-05 00:47:02.360128 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:01.827) 0:01:01.315 ********** 2026-04-05 00:47:02.360140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:02.360152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:02.360162 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.360173 | orchestrator | 2026-04-05 00:47:02.360184 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-05 00:47:02.360195 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:00.143) 0:01:01.459 ********** 2026-04-05 00:47:02.360206 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}) 2026-04-05 00:47:02.360217 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'}) 2026-04-05 00:47:02.360228 | orchestrator | 2026-04-05 00:47:02.360238 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-05 00:47:02.360249 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:01.208) 0:01:02.668 ********** 2026-04-05 00:47:02.360260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:02.360271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:02.360292 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.360303 | orchestrator | 2026-04-05 00:47:02.360314 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-05 00:47:02.360325 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.148) 0:01:02.817 ********** 2026-04-05 00:47:02.360336 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.360346 | 
orchestrator | 2026-04-05 00:47:02.360357 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-05 00:47:02.360368 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.130) 0:01:02.947 ********** 2026-04-05 00:47:02.360379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:02.360390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:02.360401 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.360412 | orchestrator | 2026-04-05 00:47:02.360422 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-05 00:47:02.360433 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.150) 0:01:03.097 ********** 2026-04-05 00:47:02.360444 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.360454 | orchestrator | 2026-04-05 00:47:02.360465 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-05 00:47:02.360476 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.135) 0:01:03.232 ********** 2026-04-05 00:47:02.360487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:02.360517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:02.360528 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.360539 | orchestrator | 2026-04-05 00:47:02.360550 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-05 00:47:02.360561 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:00.141) 0:01:03.374 ********** 2026-04-05 00:47:02.360571 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.360582 | orchestrator | 2026-04-05 00:47:02.360593 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-05 00:47:02.360604 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.130) 0:01:03.504 ********** 2026-04-05 00:47:02.360615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:02.360626 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:02.360637 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:02.360647 | orchestrator | 2026-04-05 00:47:02.360658 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-05 00:47:02.360669 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.132) 0:01:03.636 ********** 2026-04-05 00:47:02.360680 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:02.360691 | orchestrator | 2026-04-05 00:47:02.360701 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-05 00:47:02.360712 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.137) 0:01:03.774 ********** 2026-04-05 00:47:02.360730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:08.176624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:08.176735 | 
orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.176751 | orchestrator | 2026-04-05 00:47:08.176764 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-05 00:47:08.176776 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.289) 0:01:04.063 ********** 2026-04-05 00:47:08.176787 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:08.176798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:08.176809 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.176820 | orchestrator | 2026-04-05 00:47:08.176844 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-05 00:47:08.176856 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.152) 0:01:04.216 ********** 2026-04-05 00:47:08.176873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:08.176900 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:08.176924 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.176943 | orchestrator | 2026-04-05 00:47:08.176963 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-05 00:47:08.176981 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.143) 0:01:04.360 ********** 2026-04-05 00:47:08.177000 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.177018 | orchestrator | 2026-04-05 00:47:08.177038 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-05 00:47:08.177058 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:00.117) 0:01:04.477 ********** 2026-04-05 00:47:08.177076 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.177097 | orchestrator | 2026-04-05 00:47:08.177150 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-05 00:47:08.177166 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.127) 0:01:04.605 ********** 2026-04-05 00:47:08.177179 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.177191 | orchestrator | 2026-04-05 00:47:08.177205 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-05 00:47:08.177218 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.128) 0:01:04.733 ********** 2026-04-05 00:47:08.177230 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:47:08.177243 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-05 00:47:08.177256 | orchestrator | } 2026-04-05 00:47:08.177269 | orchestrator | 2026-04-05 00:47:08.177281 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-05 00:47:08.177294 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.123) 0:01:04.857 ********** 2026-04-05 00:47:08.177306 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:47:08.177319 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-05 00:47:08.177331 | orchestrator | } 2026-04-05 00:47:08.177344 | orchestrator | 2026-04-05 00:47:08.177356 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-05 00:47:08.177368 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.146) 0:01:05.003 ********** 2026-04-05 00:47:08.177382 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:47:08.177394 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-05 00:47:08.177406 | orchestrator | } 2026-04-05 00:47:08.177425 | orchestrator | 2026-04-05 00:47:08.177454 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-05 00:47:08.177477 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:00.135) 0:01:05.139 ********** 2026-04-05 00:47:08.177537 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:08.177557 | orchestrator | 2026-04-05 00:47:08.177575 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-05 00:47:08.177593 | orchestrator | Sunday 05 April 2026 00:47:04 +0000 (0:00:00.495) 0:01:05.635 ********** 2026-04-05 00:47:08.177613 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:08.177632 | orchestrator | 2026-04-05 00:47:08.177651 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-05 00:47:08.177671 | orchestrator | Sunday 05 April 2026 00:47:04 +0000 (0:00:00.501) 0:01:06.137 ********** 2026-04-05 00:47:08.177690 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:08.177709 | orchestrator | 2026-04-05 00:47:08.177727 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-05 00:47:08.177748 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.536) 0:01:06.673 ********** 2026-04-05 00:47:08.177766 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:08.177787 | orchestrator | 2026-04-05 00:47:08.177800 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-05 00:47:08.177811 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.291) 0:01:06.965 ********** 2026-04-05 00:47:08.177822 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.177833 | orchestrator | 2026-04-05 00:47:08.177844 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-05 00:47:08.177855 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.130) 0:01:07.095 ********** 2026-04-05 00:47:08.177866 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.177877 | orchestrator | 2026-04-05 00:47:08.177888 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-05 00:47:08.177899 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.111) 0:01:07.207 ********** 2026-04-05 00:47:08.177910 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:47:08.177922 | orchestrator |  "vgs_report": { 2026-04-05 00:47:08.177933 | orchestrator |  "vg": [] 2026-04-05 00:47:08.177963 | orchestrator |  } 2026-04-05 00:47:08.177975 | orchestrator | } 2026-04-05 00:47:08.177986 | orchestrator | 2026-04-05 00:47:08.177997 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-05 00:47:08.178008 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.139) 0:01:07.346 ********** 2026-04-05 00:47:08.178079 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178102 | orchestrator | 2026-04-05 00:47:08.178122 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-05 00:47:08.178154 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.119) 0:01:07.465 ********** 2026-04-05 00:47:08.178173 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178191 | orchestrator | 2026-04-05 00:47:08.178209 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-05 00:47:08.178228 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.131) 0:01:07.597 ********** 2026-04-05 00:47:08.178247 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178267 | orchestrator | 2026-04-05 00:47:08.178285 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-05 00:47:08.178297 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.127) 0:01:07.725 ********** 2026-04-05 00:47:08.178317 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178328 | orchestrator | 2026-04-05 00:47:08.178339 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-05 00:47:08.178350 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.128) 0:01:07.854 ********** 2026-04-05 00:47:08.178361 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178372 | orchestrator | 2026-04-05 00:47:08.178383 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-05 00:47:08.178394 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.127) 0:01:07.981 ********** 2026-04-05 00:47:08.178405 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178416 | orchestrator | 2026-04-05 00:47:08.178426 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-05 00:47:08.178447 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.128) 0:01:08.110 ********** 2026-04-05 00:47:08.178458 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178469 | orchestrator | 2026-04-05 00:47:08.178480 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-05 00:47:08.178491 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.115) 0:01:08.226 ********** 2026-04-05 00:47:08.178637 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178650 | orchestrator | 2026-04-05 00:47:08.178660 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-05 00:47:08.178671 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.138) 0:01:08.364 ********** 2026-04-05 00:47:08.178682 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 00:47:08.178693 | orchestrator | 2026-04-05 00:47:08.178704 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-05 00:47:08.178715 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.290) 0:01:08.655 ********** 2026-04-05 00:47:08.178726 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178736 | orchestrator | 2026-04-05 00:47:08.178747 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-05 00:47:08.178758 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.134) 0:01:08.789 ********** 2026-04-05 00:47:08.178769 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178780 | orchestrator | 2026-04-05 00:47:08.178790 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-05 00:47:08.178801 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.128) 0:01:08.918 ********** 2026-04-05 00:47:08.178812 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178823 | orchestrator | 2026-04-05 00:47:08.178833 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-05 00:47:08.178844 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.132) 0:01:09.051 ********** 2026-04-05 00:47:08.178855 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178866 | orchestrator | 2026-04-05 00:47:08.178876 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-05 00:47:08.178887 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.119) 0:01:09.170 ********** 2026-04-05 00:47:08.178898 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178909 | orchestrator | 2026-04-05 00:47:08.178920 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-05 00:47:08.178931 | 
orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.112) 0:01:09.284 ********** 2026-04-05 00:47:08.178942 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:08.178953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:08.178964 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.178975 | orchestrator | 2026-04-05 00:47:08.178986 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-05 00:47:08.178997 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.159) 0:01:09.443 ********** 2026-04-05 00:47:08.179008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:08.179019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:08.179030 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:08.179040 | orchestrator | 2026-04-05 00:47:08.179051 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-05 00:47:08.179062 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:00.144) 0:01:09.588 ********** 2026-04-05 00:47:08.179093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.265816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 
00:47:11.265968 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.265997 | orchestrator | 2026-04-05 00:47:11.266088 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-05 00:47:11.267048 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:00.141) 0:01:09.730 ********** 2026-04-05 00:47:11.267087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.267117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:11.267136 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.267156 | orchestrator | 2026-04-05 00:47:11.267175 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-05 00:47:11.267195 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:00.155) 0:01:09.885 ********** 2026-04-05 00:47:11.267215 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.267234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:11.267254 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.267272 | orchestrator | 2026-04-05 00:47:11.267290 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-05 00:47:11.267302 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:00.146) 0:01:10.032 ********** 2026-04-05 00:47:11.267313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 
'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.267324 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:11.267335 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.267346 | orchestrator | 2026-04-05 00:47:11.267357 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-05 00:47:11.267368 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:00.169) 0:01:10.201 ********** 2026-04-05 00:47:11.267379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.267390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:11.267401 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.267412 | orchestrator | 2026-04-05 00:47:11.267427 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-05 00:47:11.267445 | orchestrator | Sunday 05 April 2026 00:47:09 +0000 (0:00:00.318) 0:01:10.519 ********** 2026-04-05 00:47:11.267464 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.267484 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:11.267536 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.267556 | orchestrator | 2026-04-05 00:47:11.267575 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-05 
00:47:11.267631 | orchestrator | Sunday 05 April 2026 00:47:09 +0000 (0:00:00.180) 0:01:10.700 ********** 2026-04-05 00:47:11.267653 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:11.267675 | orchestrator | 2026-04-05 00:47:11.267694 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-05 00:47:11.267712 | orchestrator | Sunday 05 April 2026 00:47:09 +0000 (0:00:00.514) 0:01:11.215 ********** 2026-04-05 00:47:11.267730 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:11.267742 | orchestrator | 2026-04-05 00:47:11.267753 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-05 00:47:11.267764 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.548) 0:01:11.764 ********** 2026-04-05 00:47:11.267774 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:11.267785 | orchestrator | 2026-04-05 00:47:11.267796 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-05 00:47:11.267813 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.159) 0:01:11.924 ********** 2026-04-05 00:47:11.267832 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'vg_name': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'}) 2026-04-05 00:47:11.267853 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'vg_name': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}) 2026-04-05 00:47:11.267874 | orchestrator | 2026-04-05 00:47:11.267894 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-05 00:47:11.267915 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.164) 0:01:12.088 ********** 2026-04-05 00:47:11.267960 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 
'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.267982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:11.268001 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.268012 | orchestrator | 2026-04-05 00:47:11.268023 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-05 00:47:11.268034 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.165) 0:01:12.253 ********** 2026-04-05 00:47:11.268045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.268065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:11.268077 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.268088 | orchestrator | 2026-04-05 00:47:11.268099 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-05 00:47:11.268110 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:00.157) 0:01:12.411 ********** 2026-04-05 00:47:11.268121 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'})  2026-04-05 00:47:11.268132 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'})  2026-04-05 00:47:11.268143 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:11.268153 | orchestrator | 2026-04-05 00:47:11.268164 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-05 
00:47:11.268175 | orchestrator | Sunday 05 April 2026 00:47:11 +0000 (0:00:00.170) 0:01:12.581 ********** 2026-04-05 00:47:11.268186 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:47:11.268197 | orchestrator |  "lvm_report": { 2026-04-05 00:47:11.268208 | orchestrator |  "lv": [ 2026-04-05 00:47:11.268219 | orchestrator |  { 2026-04-05 00:47:11.268230 | orchestrator |  "lv_name": "osd-block-ecfcc343-98df-5597-aad3-97c87b883418", 2026-04-05 00:47:11.268261 | orchestrator |  "vg_name": "ceph-ecfcc343-98df-5597-aad3-97c87b883418" 2026-04-05 00:47:11.268272 | orchestrator |  }, 2026-04-05 00:47:11.268283 | orchestrator |  { 2026-04-05 00:47:11.268294 | orchestrator |  "lv_name": "osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603", 2026-04-05 00:47:11.268305 | orchestrator |  "vg_name": "ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603" 2026-04-05 00:47:11.268316 | orchestrator |  } 2026-04-05 00:47:11.268326 | orchestrator |  ], 2026-04-05 00:47:11.268337 | orchestrator |  "pv": [ 2026-04-05 00:47:11.268351 | orchestrator |  { 2026-04-05 00:47:11.268370 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-05 00:47:11.268388 | orchestrator |  "vg_name": "ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603" 2026-04-05 00:47:11.268408 | orchestrator |  }, 2026-04-05 00:47:11.268427 | orchestrator |  { 2026-04-05 00:47:11.268447 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-05 00:47:11.268463 | orchestrator |  "vg_name": "ceph-ecfcc343-98df-5597-aad3-97c87b883418" 2026-04-05 00:47:11.268474 | orchestrator |  } 2026-04-05 00:47:11.268485 | orchestrator |  ] 2026-04-05 00:47:11.268495 | orchestrator |  } 2026-04-05 00:47:11.268539 | orchestrator | } 2026-04-05 00:47:11.268551 | orchestrator | 2026-04-05 00:47:11.268562 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:47:11.268573 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:47:11.268587 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:47:11.268605 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:47:11.268622 | orchestrator | 2026-04-05 00:47:11.268639 | orchestrator | 2026-04-05 00:47:11.268656 | orchestrator | 2026-04-05 00:47:11.268673 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:47:11.268692 | orchestrator | Sunday 05 April 2026 00:47:11 +0000 (0:00:00.142) 0:01:12.724 ********** 2026-04-05 00:47:11.268711 | orchestrator | =============================================================================== 2026-04-05 00:47:11.268728 | orchestrator | Create block VGs -------------------------------------------------------- 5.83s 2026-04-05 00:47:11.268746 | orchestrator | Create block LVs -------------------------------------------------------- 3.96s 2026-04-05 00:47:11.268764 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.92s 2026-04-05 00:47:11.268782 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.59s 2026-04-05 00:47:11.268800 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.58s 2026-04-05 00:47:11.268812 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s 2026-04-05 00:47:11.268822 | orchestrator | Add known partitions to the list of available block devices ------------- 1.51s 2026-04-05 00:47:11.268833 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s 2026-04-05 00:47:11.268859 | orchestrator | Add known links to the list of available block devices ------------------ 1.26s 2026-04-05 00:47:11.688337 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s 2026-04-05 
00:47:11.688456 | orchestrator | Print LVM report data --------------------------------------------------- 0.89s 2026-04-05 00:47:11.688472 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-04-05 00:47:11.688485 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2026-04-05 00:47:11.688496 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.78s 2026-04-05 00:47:11.688607 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.76s 2026-04-05 00:47:11.688620 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-04-05 00:47:11.688631 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2026-04-05 00:47:11.688642 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s 2026-04-05 00:47:11.688653 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-04-05 00:47:11.688664 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.70s 2026-04-05 00:47:23.328483 | orchestrator | 2026-04-05 00:47:23 | INFO  | Prepare task for execution of facts. 2026-04-05 00:47:23.408245 | orchestrator | 2026-04-05 00:47:23 | INFO  | Task 491c7b4e-fc5c-418f-a67f-7ff0dc8ed5b0 (facts) was prepared for execution. 2026-04-05 00:47:23.408344 | orchestrator | 2026-04-05 00:47:23 | INFO  | It takes a moment until task 491c7b4e-fc5c-418f-a67f-7ff0dc8ed5b0 (facts) has been started and output is visible here. 
2026-04-05 00:47:35.353309 | orchestrator | 2026-04-05 00:47:35.353410 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 00:47:35.353426 | orchestrator | 2026-04-05 00:47:35.353440 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 00:47:35.353452 | orchestrator | Sunday 05 April 2026 00:47:26 +0000 (0:00:00.304) 0:00:00.304 ********** 2026-04-05 00:47:35.353463 | orchestrator | ok: [testbed-manager] 2026-04-05 00:47:35.353475 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:47:35.353486 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:47:35.353497 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:47:35.353603 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:47:35.353616 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:47:35.353627 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:35.353638 | orchestrator | 2026-04-05 00:47:35.353649 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 00:47:35.353679 | orchestrator | Sunday 05 April 2026 00:47:28 +0000 (0:00:01.242) 0:00:01.547 ********** 2026-04-05 00:47:35.353691 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:47:35.353702 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:47:35.353713 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:47:35.353724 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:47:35.353735 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:47:35.353746 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:47:35.353757 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:35.353768 | orchestrator | 2026-04-05 00:47:35.353779 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:47:35.353790 | orchestrator | 2026-04-05 00:47:35.353801 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-05 00:47:35.353812 | orchestrator | Sunday 05 April 2026 00:47:29 +0000 (0:00:01.202) 0:00:02.750 ********** 2026-04-05 00:47:35.353823 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:47:35.353834 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:47:35.353845 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:47:35.353855 | orchestrator | ok: [testbed-manager] 2026-04-05 00:47:35.353868 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:47:35.353881 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:47:35.353893 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:47:35.353905 | orchestrator | 2026-04-05 00:47:35.353918 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 00:47:35.353931 | orchestrator | 2026-04-05 00:47:35.353944 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 00:47:35.353957 | orchestrator | Sunday 05 April 2026 00:47:34 +0000 (0:00:05.152) 0:00:07.902 ********** 2026-04-05 00:47:35.353970 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:47:35.353983 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:47:35.353995 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:47:35.354086 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:47:35.354102 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:47:35.354115 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:47:35.354128 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:47:35.354141 | orchestrator | 2026-04-05 00:47:35.354154 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:47:35.354166 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:35.354178 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-05 00:47:35.354189 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:35.354200 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:35.354211 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:35.354222 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:35.354233 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:47:35.354244 | orchestrator | 2026-04-05 00:47:35.354255 | orchestrator | 2026-04-05 00:47:35.354266 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:47:35.354277 | orchestrator | Sunday 05 April 2026 00:47:35 +0000 (0:00:00.552) 0:00:08.455 ********** 2026-04-05 00:47:35.354288 | orchestrator | =============================================================================== 2026-04-05 00:47:35.354298 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.15s 2026-04-05 00:47:35.354309 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s 2026-04-05 00:47:35.354326 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2026-04-05 00:47:35.354337 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-04-05 00:47:46.769996 | orchestrator | 2026-04-05 00:47:46 | INFO  | Prepare task for execution of frr. 2026-04-05 00:47:46.854327 | orchestrator | 2026-04-05 00:47:46 | INFO  | Task c3874137-eff5-4c0c-bf40-fc75cd778e1f (frr) was prepared for execution. 
2026-04-05 00:47:46.854675 | orchestrator | 2026-04-05 00:47:46 | INFO  | It takes a moment until task c3874137-eff5-4c0c-bf40-fc75cd778e1f (frr) has been started and output is visible here. 2026-04-05 00:48:14.719023 | orchestrator | 2026-04-05 00:48:14.719139 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-05 00:48:14.719154 | orchestrator | 2026-04-05 00:48:14.719166 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-05 00:48:14.719176 | orchestrator | Sunday 05 April 2026 00:47:50 +0000 (0:00:00.314) 0:00:00.314 ********** 2026-04-05 00:48:14.719186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:48:14.719197 | orchestrator | 2026-04-05 00:48:14.719207 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-05 00:48:14.719217 | orchestrator | Sunday 05 April 2026 00:47:50 +0000 (0:00:00.240) 0:00:00.555 ********** 2026-04-05 00:48:14.719227 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:14.719238 | orchestrator | 2026-04-05 00:48:14.719247 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-05 00:48:14.719257 | orchestrator | Sunday 05 April 2026 00:47:52 +0000 (0:00:01.578) 0:00:02.134 ********** 2026-04-05 00:48:14.719292 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:14.719302 | orchestrator | 2026-04-05 00:48:14.719312 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-05 00:48:14.719321 | orchestrator | Sunday 05 April 2026 00:48:02 +0000 (0:00:10.743) 0:00:12.877 ********** 2026-04-05 00:48:14.719331 | orchestrator | ok: [testbed-manager] 2026-04-05 00:48:14.719341 | orchestrator | 2026-04-05 00:48:14.719351 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-05 00:48:14.719361 | orchestrator | Sunday 05 April 2026 00:48:03 +0000 (0:00:01.169) 0:00:14.047 ********** 2026-04-05 00:48:14.719371 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:14.719380 | orchestrator | 2026-04-05 00:48:14.719390 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-05 00:48:14.719399 | orchestrator | Sunday 05 April 2026 00:48:04 +0000 (0:00:00.989) 0:00:15.037 ********** 2026-04-05 00:48:14.719409 | orchestrator | ok: [testbed-manager] 2026-04-05 00:48:14.719418 | orchestrator | 2026-04-05 00:48:14.719428 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-05 00:48:14.719437 | orchestrator | Sunday 05 April 2026 00:48:06 +0000 (0:00:01.298) 0:00:16.336 ********** 2026-04-05 00:48:14.719447 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:14.719456 | orchestrator | 2026-04-05 00:48:14.719466 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-05 00:48:14.719475 | orchestrator | Sunday 05 April 2026 00:48:06 +0000 (0:00:00.169) 0:00:16.505 ********** 2026-04-05 00:48:14.719485 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:14.719494 | orchestrator | 2026-04-05 00:48:14.719531 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-05 00:48:14.719543 | orchestrator | Sunday 05 April 2026 00:48:06 +0000 (0:00:00.342) 0:00:16.848 ********** 2026-04-05 00:48:14.719554 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:14.719565 | orchestrator | 2026-04-05 00:48:14.719577 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-05 00:48:14.719589 | orchestrator | Sunday 05 April 2026 00:48:06 +0000 (0:00:00.161) 0:00:17.010 ********** 2026-04-05 
00:48:14.719601 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:14.719612 | orchestrator | 2026-04-05 00:48:14.719623 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-05 00:48:14.719634 | orchestrator | Sunday 05 April 2026 00:48:07 +0000 (0:00:00.174) 0:00:17.184 ********** 2026-04-05 00:48:14.719646 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:48:14.719657 | orchestrator | 2026-04-05 00:48:14.719668 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-05 00:48:14.719679 | orchestrator | Sunday 05 April 2026 00:48:07 +0000 (0:00:00.152) 0:00:17.337 ********** 2026-04-05 00:48:14.719690 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:14.719700 | orchestrator | 2026-04-05 00:48:14.719712 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-05 00:48:14.719724 | orchestrator | Sunday 05 April 2026 00:48:08 +0000 (0:00:01.032) 0:00:18.370 ********** 2026-04-05 00:48:14.719735 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-05 00:48:14.719747 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-05 00:48:14.719759 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-05 00:48:14.719770 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-05 00:48:14.719781 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-05 00:48:14.719793 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-05 00:48:14.719805 | orchestrator | 2026-04-05 00:48:14.719816 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-05 00:48:14.719835 | orchestrator | Sunday 05 April 2026 00:48:11 +0000 (0:00:03.360) 0:00:21.730 ********** 2026-04-05 00:48:14.719862 | orchestrator | ok: [testbed-manager] 2026-04-05 00:48:14.719874 | orchestrator | 2026-04-05 00:48:14.719884 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-05 00:48:14.719896 | orchestrator | Sunday 05 April 2026 00:48:12 +0000 (0:00:01.248) 0:00:22.979 ********** 2026-04-05 00:48:14.719907 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:14.719917 | orchestrator | 2026-04-05 00:48:14.719927 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:48:14.719936 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 00:48:14.719946 | orchestrator | 2026-04-05 00:48:14.719956 | orchestrator | 2026-04-05 00:48:14.719982 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:48:14.719992 | orchestrator | Sunday 05 April 2026 00:48:14 +0000 (0:00:01.405) 0:00:24.385 ********** 2026-04-05 00:48:14.720002 | orchestrator | =============================================================================== 2026-04-05 00:48:14.720011 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.74s 2026-04-05 00:48:14.720021 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.36s 2026-04-05 00:48:14.720031 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.58s 2026-04-05 00:48:14.720040 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.41s 2026-04-05 00:48:14.720050 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.30s 
2026-04-05 00:48:14.720059 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.25s
2026-04-05 00:48:14.720069 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.17s
2026-04-05 00:48:14.720078 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.03s
2026-04-05 00:48:14.720088 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.99s
2026-04-05 00:48:14.720097 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.34s
2026-04-05 00:48:14.720107 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s
2026-04-05 00:48:14.720116 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s
2026-04-05 00:48:14.720126 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.17s
2026-04-05 00:48:14.720136 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s
2026-04-05 00:48:14.720145 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s
2026-04-05 00:48:14.920140 | orchestrator |
2026-04-05 00:48:14.924174 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Apr 5 00:48:14 UTC 2026
2026-04-05 00:48:14.924218 | orchestrator |
2026-04-05 00:48:16.119398 | orchestrator | 2026-04-05 00:48:16 | INFO  | Collection nutshell is prepared for execution
2026-04-05 00:48:16.249768 | orchestrator | 2026-04-05 00:48:16 | INFO  | A [0] - dotfiles
2026-04-05 00:48:26.359134 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [0] - homer
2026-04-05 00:48:26.359248 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [0] - netdata
2026-04-05 00:48:26.359263 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [0] - openstackclient
2026-04-05 00:48:26.359275 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [0] - phpmyadmin
2026-04-05 00:48:26.359286 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [0] - common
2026-04-05 00:48:26.364265 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [1] -- loadbalancer
2026-04-05 00:48:26.365101 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [2] --- opensearch
2026-04-05 00:48:26.366416 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [2] --- mariadb-ng
2026-04-05 00:48:26.366676 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [3] ---- horizon
2026-04-05 00:48:26.366713 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [3] ---- keystone
2026-04-05 00:48:26.368000 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- neutron
2026-04-05 00:48:26.368072 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [5] ------ wait-for-nova
2026-04-05 00:48:26.368088 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [6] ------- octavia
2026-04-05 00:48:26.370227 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- barbican
2026-04-05 00:48:26.370292 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- designate
2026-04-05 00:48:26.371341 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- ironic
2026-04-05 00:48:26.371533 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- placement
2026-04-05 00:48:26.372000 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- magnum
2026-04-05 00:48:26.374448 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [1] -- openvswitch
2026-04-05 00:48:26.374679 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [2] --- ovn
2026-04-05 00:48:26.375814 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [1] -- memcached
2026-04-05 00:48:26.376084 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [1] -- redis
2026-04-05 00:48:26.376749 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [1] -- rabbitmq-ng
2026-04-05 00:48:26.377307 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [0] - kubernetes
2026-04-05 00:48:26.382679 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [1] -- kubeconfig
2026-04-05 00:48:26.382737 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [1] -- copy-kubeconfig
2026-04-05 00:48:26.382759 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [0] - ceph
2026-04-05 00:48:26.386235 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [1] -- ceph-pools
2026-04-05 00:48:26.386282 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [2] --- copy-ceph-keys
2026-04-05 00:48:26.387101 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [3] ---- cephclient
2026-04-05 00:48:26.387158 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-04-05 00:48:26.387179 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- wait-for-keystone
2026-04-05 00:48:26.387451 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [5] ------ kolla-ceph-rgw
2026-04-05 00:48:26.387479 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [5] ------ glance
2026-04-05 00:48:26.387745 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [5] ------ cinder
2026-04-05 00:48:26.388039 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [5] ------ nova
2026-04-05 00:48:26.388389 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [4] ----- prometheus
2026-04-05 00:48:26.388410 | orchestrator | 2026-04-05 00:48:26 | INFO  | A [5] ------ grafana
2026-04-05 00:48:26.632663 | orchestrator | 2026-04-05 00:48:26 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-04-05 00:48:26.632749 | orchestrator | 2026-04-05 00:48:26 | INFO  | Tasks are running in the background
2026-04-05 00:48:28.714321 | orchestrator | 2026-04-05 00:48:28 | INFO  | No task IDs specified, wait for all currently running tasks
2026-04-05 00:48:30.954894 | orchestrator | 2026-04-05 00:48:30 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:30.956727 | orchestrator | 2026-04-05 00:48:30 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:30.957649 | orchestrator | 2026-04-05 00:48:30 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:30.959203 | orchestrator | 2026-04-05 00:48:30 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:30.960166 | orchestrator | 2026-04-05 00:48:30 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:30.964463 | orchestrator | 2026-04-05 00:48:30 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:30.965369 | orchestrator | 2026-04-05 00:48:30 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:30.965404 | orchestrator | 2026-04-05 00:48:30 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:34.003392 | orchestrator | 2026-04-05 00:48:34 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:34.007472 | orchestrator | 2026-04-05 00:48:34 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:34.009849 | orchestrator | 2026-04-05 00:48:34 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:34.017483 | orchestrator | 2026-04-05 00:48:34 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:34.019221 | orchestrator | 2026-04-05 00:48:34 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:34.023202 | orchestrator | 2026-04-05 00:48:34 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:34.027289 | orchestrator | 2026-04-05 00:48:34 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:34.027355 | orchestrator | 2026-04-05 00:48:34 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:37.157950 | orchestrator | 2026-04-05 00:48:37 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:37.158417 | orchestrator | 2026-04-05 00:48:37 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:37.167840 | orchestrator | 2026-04-05 00:48:37 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:37.168607 | orchestrator | 2026-04-05 00:48:37 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:37.169844 | orchestrator | 2026-04-05 00:48:37 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:37.172593 | orchestrator | 2026-04-05 00:48:37 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:37.173354 | orchestrator | 2026-04-05 00:48:37 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:37.173395 | orchestrator | 2026-04-05 00:48:37 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:40.215815 | orchestrator | 2026-04-05 00:48:40 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:40.218153 | orchestrator | 2026-04-05 00:48:40 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:40.218902 | orchestrator | 2026-04-05 00:48:40 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:40.219903 | orchestrator | 2026-04-05 00:48:40 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:40.221896 | orchestrator | 2026-04-05 00:48:40 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:40.223307 | orchestrator | 2026-04-05 00:48:40 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:40.225305 | orchestrator | 2026-04-05 00:48:40 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:40.225375 | orchestrator | 2026-04-05 00:48:40 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:43.372155 | orchestrator | 2026-04-05 00:48:43 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:43.372246 | orchestrator | 2026-04-05 00:48:43 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:43.372256 | orchestrator | 2026-04-05 00:48:43 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:43.372264 | orchestrator | 2026-04-05 00:48:43 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:43.372272 | orchestrator | 2026-04-05 00:48:43 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:43.372279 | orchestrator | 2026-04-05 00:48:43 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:43.372287 | orchestrator | 2026-04-05 00:48:43 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:43.372294 | orchestrator | 2026-04-05 00:48:43 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:46.416608 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:46.418349 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:46.425489 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:46.426944 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:46.429661 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:46.436086 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:46.439493 | orchestrator | 2026-04-05 00:48:46 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:46.443443 | orchestrator | 2026-04-05 00:48:46 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:50.147380 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:50.148162 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:50.152868 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:50.152921 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:50.152931 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:50.160040 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:50.160093 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:50.160102 | orchestrator | 2026-04-05 00:48:50 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:53.376895 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:53.377878 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:53.379449 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:53.381476 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:53.382261 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:53.383251 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:53.384542 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:53.384877 | orchestrator | 2026-04-05 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:56.462268 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:56.468137 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:56.468960 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:56.473880 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:56.477322 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:56.477865 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:56.482096 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:56.482129 | orchestrator | 2026-04-05 00:48:56 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:59.617572 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state STARTED
2026-04-05 00:48:59.617669 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:48:59.617684 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:48:59.617694 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:48:59.617704 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:48:59.617714 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:48:59.617725 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:48:59.617735 | orchestrator | 2026-04-05 00:48:59 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:02.861750 | orchestrator |
2026-04-05 00:49:02.861843 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-05 00:49:02.861859 | orchestrator |
2026-04-05 00:49:02.861871 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-04-05 00:49:02.861882 | orchestrator | Sunday 05 April 2026 00:48:38 +0000 (0:00:01.317) 0:00:01.317 **********
2026-04-05 00:49:02.861894 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:49:02.861905 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:49:02.861916 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:49:02.861927 | orchestrator | changed: [testbed-manager]
2026-04-05 00:49:02.861938 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:49:02.861949 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:49:02.861959 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:49:02.861970 | orchestrator |
2026-04-05 00:49:02.861982 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-05 00:49:02.862072 | orchestrator | Sunday 05 April 2026 00:48:45 +0000 (0:00:06.992) 0:00:08.310 **********
2026-04-05 00:49:02.862087 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-05 00:49:02.862098 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-05 00:49:02.862110 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-05 00:49:02.862121 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-05 00:49:02.862132 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-05 00:49:02.862143 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-05 00:49:02.862154 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-05 00:49:02.862165 | orchestrator |
2026-04-05 00:49:02.862176 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-04-05 00:49:02.862189 | orchestrator | Sunday 05 April 2026 00:48:50 +0000 (0:00:05.129) 0:00:13.439 **********
2026-04-05 00:49:02.862212 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:48:48.530271', 'end': '2026-04-05 00:48:48.536932', 'delta': '0:00:00.006661', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-05 00:49:02.862232 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:48:48.753026', 'end': '2026-04-05 00:48:48.761991', 'delta': '0:00:00.008965', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-05 00:49:02.862244 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:48:48.375569', 'end': '2026-04-05 00:48:48.385269', 'delta': '0:00:00.009700', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-05 00:49:02.862282 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:48:48.621910', 'end': '2026-04-05 00:48:48.630593', 'delta': '0:00:00.008683', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-05 00:49:02.862305 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:48:48.638267', 'end': '2026-04-05 00:48:48.644257', 'delta': '0:00:00.005990', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-05 00:49:02.862325 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:48:50.402150', 'end': '2026-04-05 00:48:50.407868', 'delta': '0:00:00.005718', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-05 00:49:02.862339 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:48:49.534635', 'end': '2026-04-05 00:48:49.541350', 'delta': '0:00:00.006715', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-05 00:49:02.862353 | orchestrator |
2026-04-05 00:49:02.862367 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-05 00:49:02.862387 | orchestrator | Sunday 05 April 2026 00:48:53 +0000 (0:00:03.162) 0:00:16.602 **********
2026-04-05 00:49:02.862407 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-05 00:49:02.862424 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-05 00:49:02.862445 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-05 00:49:02.862465 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-05 00:49:02.862486 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-05 00:49:02.862528 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-05 00:49:02.862545 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-05 00:49:02.862558 | orchestrator |
2026-04-05 00:49:02.862571 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-04-05 00:49:02.862584 | orchestrator | Sunday 05 April 2026 00:48:57 +0000 (0:00:03.880) 0:00:20.483 **********
2026-04-05 00:49:02.862597 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-04-05 00:49:02.862611 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-04-05 00:49:02.862633 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-04-05 00:49:02.862646 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-04-05 00:49:02.862658 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-04-05 00:49:02.862668 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-04-05 00:49:02.862680 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-04-05 00:49:02.862690 | orchestrator |
2026-04-05 00:49:02.862702 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:49:02.862722 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:02.862734 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:02.862746 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:02.862757 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:02.862768 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:02.862779 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:02.862790 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:49:02.862801 | orchestrator |
2026-04-05 00:49:02.862812 | orchestrator |
2026-04-05 00:49:02.862823 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:49:02.862834 | orchestrator | Sunday 05 April 2026 00:49:00 +0000 (0:00:02.942) 0:00:23.426 **********
2026-04-05 00:49:02.862845 | orchestrator | ===============================================================================
2026-04-05 00:49:02.862856 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 6.99s
2026-04-05 00:49:02.862867 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 5.13s
2026-04-05 00:49:02.862878 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.88s
2026-04-05 00:49:02.862895 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.16s
2026-04-05 00:49:02.862906 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.94s
2026-04-05 00:49:02.862917 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task b63f704f-8a06-440e-962a-48fb99959986 is in state SUCCESS
2026-04-05 00:49:02.862928 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:02.862939 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:02.863798 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:02.863826 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:02.864722 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:02.866700 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:02.866731 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:02.866743 | orchestrator | 2026-04-05 00:49:02 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:05.968537 | orchestrator | 2026-04-05 00:49:05 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:05.969063 | orchestrator | 2026-04-05 00:49:05 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:05.972751 | orchestrator | 2026-04-05 00:49:05 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:05.973545 | orchestrator | 2026-04-05 00:49:05 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:05.974130 | orchestrator | 2026-04-05 00:49:05 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:05.975298 | orchestrator | 2026-04-05 00:49:05 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:05.979550 | orchestrator | 2026-04-05 00:49:05 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:05.979618 | orchestrator | 2026-04-05 00:49:05 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:09.035870 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:09.035947 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:09.039440 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:09.047007 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:09.048764 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:09.048818 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:09.059853 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:09.059905 | orchestrator | 2026-04-05 00:49:09 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:12.102261 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:12.103426 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:12.105923 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:12.107619 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:12.109889 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:12.110969 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:12.112240 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:12.112278 | orchestrator | 2026-04-05 00:49:12 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:15.182144 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:15.182371 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:15.184140 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:15.184910 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:15.186547 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:15.186865 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:15.188059 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:15.188088 | orchestrator | 2026-04-05 00:49:15 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:18.266862 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:18.271208 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:18.274134 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:18.278176 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:18.280435 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:18.282836 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:18.284662 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:18.285734 | orchestrator | 2026-04-05 00:49:18 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:21.350319 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:21.350401 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:21.350409 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:21.350414 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:21.350419 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:21.350424 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:21.350429 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:21.350434 | orchestrator | 2026-04-05 00:49:21 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:24.657869 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:24.657960 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:24.657969 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:24.657976 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:24.657983 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:24.657989 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:24.657996 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:24.658004 | orchestrator | 2026-04-05 00:49:24 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:27.500198 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:27.500299 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED
2026-04-05 00:49:27.500314 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:49:27.500326 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:49:27.500337 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED
2026-04-05 00:49:27.500348 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:49:27.500359 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:49:27.500370 | orchestrator | 2026-04-05 00:49:27 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:30.625578 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED
2026-04-05 00:49:30.625669 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task
a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED 2026-04-05 00:49:30.625680 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:30.625688 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:30.625696 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED 2026-04-05 00:49:30.625703 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:30.625711 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:30.625718 | orchestrator | 2026-04-05 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:33.724348 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:33.724438 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state STARTED 2026-04-05 00:49:33.724453 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:33.724464 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:33.724471 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED 2026-04-05 00:49:33.724477 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:33.724485 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:33.724881 | orchestrator | 2026-04-05 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:36.846626 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 
af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:36.846733 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task a2056ebb-b621-4af7-b91f-5499e1619fe3 is in state SUCCESS 2026-04-05 00:49:36.846747 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:36.846760 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:36.846799 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED 2026-04-05 00:49:36.846811 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:36.846822 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:36.846833 | orchestrator | 2026-04-05 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:39.887290 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:39.887407 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:39.887424 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:39.887436 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED 2026-04-05 00:49:39.887447 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:39.887458 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:39.887469 | orchestrator | 2026-04-05 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:43.124408 | orchestrator | 2026-04-05 00:49:43 | INFO  | Task 
af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:43.136063 | orchestrator | 2026-04-05 00:49:43 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:43.143904 | orchestrator | 2026-04-05 00:49:43 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:43.145338 | orchestrator | 2026-04-05 00:49:43 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state STARTED 2026-04-05 00:49:43.148519 | orchestrator | 2026-04-05 00:49:43 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:43.153024 | orchestrator | 2026-04-05 00:49:43 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:43.153088 | orchestrator | 2026-04-05 00:49:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:46.414383 | orchestrator | 2026-04-05 00:49:46 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:46.414453 | orchestrator | 2026-04-05 00:49:46 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:46.414459 | orchestrator | 2026-04-05 00:49:46 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:46.414464 | orchestrator | 2026-04-05 00:49:46 | INFO  | Task 8967e8e6-6f0d-4a0e-b78b-8c31876bed5d is in state SUCCESS 2026-04-05 00:49:46.414470 | orchestrator | 2026-04-05 00:49:46 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:46.414477 | orchestrator | 2026-04-05 00:49:46 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:46.414484 | orchestrator | 2026-04-05 00:49:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:49.367268 | orchestrator | 2026-04-05 00:49:49 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:49.368673 | orchestrator | 2026-04-05 00:49:49 | INFO  | Task 
a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:49.370170 | orchestrator | 2026-04-05 00:49:49 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:49.371599 | orchestrator | 2026-04-05 00:49:49 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:49.372771 | orchestrator | 2026-04-05 00:49:49 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:49.373546 | orchestrator | 2026-04-05 00:49:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:52.444582 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:52.447302 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:52.449116 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:52.450625 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:52.453589 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:52.453632 | orchestrator | 2026-04-05 00:49:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:55.542820 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:55.544005 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:55.546535 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:55.548363 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:55.551206 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task 
20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:55.551281 | orchestrator | 2026-04-05 00:49:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:58.619696 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state STARTED 2026-04-05 00:49:58.620957 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:49:58.624085 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED 2026-04-05 00:49:58.625109 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:49:58.627595 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED 2026-04-05 00:49:58.627657 | orchestrator | 2026-04-05 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:01.679602 | orchestrator | 2026-04-05 00:50:01.679821 | orchestrator | 2026-04-05 00:50:01.679854 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-04-05 00:50:01.679874 | orchestrator | 2026-04-05 00:50:01.679896 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-05 00:50:01.679918 | orchestrator | Sunday 05 April 2026 00:48:39 +0000 (0:00:01.394) 0:00:01.405 ********** 2026-04-05 00:50:01.680000 | orchestrator | ok: [testbed-manager] => { 2026-04-05 00:50:01.680029 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-04-05 00:50:01.680055 | orchestrator | }
2026-04-05 00:50:01.680175 | orchestrator |
2026-04-05 00:50:01.680193 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-05 00:50:01.680250 | orchestrator | Sunday 05 April 2026 00:48:39 +0000 (0:00:00.778) 0:00:02.183 **********
2026-04-05 00:50:01.680270 | orchestrator | ok: [testbed-manager]
2026-04-05 00:50:01.680289 | orchestrator |
2026-04-05 00:50:01.680306 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-05 00:50:01.680355 | orchestrator | Sunday 05 April 2026 00:48:44 +0000 (0:00:04.577) 0:00:06.761 **********
2026-04-05 00:50:01.680376 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-05 00:50:01.680394 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-05 00:50:01.680411 | orchestrator |
2026-04-05 00:50:01.680428 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-05 00:50:01.680445 | orchestrator | Sunday 05 April 2026 00:48:50 +0000 (0:00:05.611) 0:00:12.372 **********
2026-04-05 00:50:01.680462 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.680480 | orchestrator |
2026-04-05 00:50:01.680498 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-05 00:50:01.680605 | orchestrator | Sunday 05 April 2026 00:48:53 +0000 (0:00:03.169) 0:00:15.542 **********
2026-04-05 00:50:01.680622 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.680640 | orchestrator |
2026-04-05 00:50:01.680658 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-05 00:50:01.680677 | orchestrator | Sunday 05 April 2026 00:48:56 +0000 (0:00:02.725) 0:00:18.267 **********
2026-04-05 00:50:01.680696 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-05 00:50:01.680713 | orchestrator | ok: [testbed-manager]
2026-04-05 00:50:01.680732 | orchestrator |
2026-04-05 00:50:01.680765 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-05 00:50:01.680783 | orchestrator | Sunday 05 April 2026 00:49:28 +0000 (0:00:32.037) 0:00:50.305 **********
2026-04-05 00:50:01.680801 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.680820 | orchestrator |
2026-04-05 00:50:01.680838 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:50:01.680859 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:01.680927 | orchestrator |
2026-04-05 00:50:01.680949 | orchestrator |
2026-04-05 00:50:01.680960 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:50:01.680972 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:05.136) 0:00:55.441 **********
2026-04-05 00:50:01.680983 | orchestrator | ===============================================================================
2026-04-05 00:50:01.680994 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 32.04s
2026-04-05 00:50:01.681005 | orchestrator | osism.services.homer : Create required directories ---------------------- 5.61s
2026-04-05 00:50:01.681016 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.14s
2026-04-05 00:50:01.681026 | orchestrator | osism.services.homer : Create traefik external network ------------------ 4.58s
2026-04-05 00:50:01.681038 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.17s
2026-04-05 00:50:01.681049 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.73s
2026-04-05 00:50:01.681060 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.78s
2026-04-05 00:50:01.681071 | orchestrator |
2026-04-05 00:50:01.681082 | orchestrator |
2026-04-05 00:50:01.681093 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-05 00:50:01.681107 | orchestrator |
2026-04-05 00:50:01.681184 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-05 00:50:01.681203 | orchestrator | Sunday 05 April 2026 00:48:38 +0000 (0:00:01.076) 0:00:01.076 **********
2026-04-05 00:50:01.681221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-05 00:50:01.681240 | orchestrator |
2026-04-05 00:50:01.681258 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-05 00:50:01.681275 | orchestrator | Sunday 05 April 2026 00:48:39 +0000 (0:00:00.712) 0:00:01.788 **********
2026-04-05 00:50:01.681312 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-05 00:50:01.681330 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-05 00:50:01.681402 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-05 00:50:01.681413 | orchestrator |
2026-04-05 00:50:01.681423 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-05 00:50:01.681432 | orchestrator | Sunday 05 April 2026 00:48:44 +0000 (0:00:05.145) 0:00:06.933 **********
2026-04-05 00:50:01.681484 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.681494 | orchestrator |
2026-04-05 00:50:01.681602 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-05 00:50:01.681615 | orchestrator | Sunday 05 April 2026 00:48:48 +0000 (0:00:03.777) 0:00:10.711 **********
2026-04-05 00:50:01.681652 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-05 00:50:01.681664 | orchestrator | ok: [testbed-manager]
2026-04-05 00:50:01.681674 | orchestrator |
2026-04-05 00:50:01.681684 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-05 00:50:01.681694 | orchestrator | Sunday 05 April 2026 00:49:29 +0000 (0:00:41.134) 0:00:51.845 **********
2026-04-05 00:50:01.681704 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.681715 | orchestrator |
2026-04-05 00:50:01.681725 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-05 00:50:01.681735 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:04.577) 0:00:56.423 **********
2026-04-05 00:50:01.681745 | orchestrator | ok: [testbed-manager]
2026-04-05 00:50:01.681755 | orchestrator |
2026-04-05 00:50:01.681765 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-05 00:50:01.681775 | orchestrator | Sunday 05 April 2026 00:49:35 +0000 (0:00:01.761) 0:00:58.185 **********
2026-04-05 00:50:01.681785 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.681795 | orchestrator |
2026-04-05 00:50:01.681805 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-05 00:50:01.681815 | orchestrator | Sunday 05 April 2026 00:49:38 +0000 (0:00:03.216) 0:01:01.401 **********
2026-04-05 00:50:01.681825 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.681835 | orchestrator |
2026-04-05 00:50:01.681846 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-05 00:50:01.681856 | orchestrator | Sunday 05 April 2026 00:49:40 +0000 (0:00:02.042) 0:01:03.443 **********
2026-04-05 00:50:01.681866 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.681876 | orchestrator |
2026-04-05 00:50:01.681886 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-05 00:50:01.681896 | orchestrator | Sunday 05 April 2026 00:49:41 +0000 (0:00:00.896) 0:01:04.340 **********
2026-04-05 00:50:01.681906 | orchestrator | ok: [testbed-manager]
2026-04-05 00:50:01.681916 | orchestrator |
2026-04-05 00:50:01.681927 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:50:01.681937 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:01.681947 | orchestrator |
2026-04-05 00:50:01.681957 | orchestrator |
2026-04-05 00:50:01.681967 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:50:01.681991 | orchestrator | Sunday 05 April 2026 00:49:42 +0000 (0:00:01.023) 0:01:05.364 **********
2026-04-05 00:50:01.682001 | orchestrator | ===============================================================================
2026-04-05 00:50:01.682011 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 41.13s
2026-04-05 00:50:01.682074 | orchestrator | osism.services.openstackclient : Create required directories ------------ 5.15s
2026-04-05 00:50:01.682085 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 4.58s
2026-04-05 00:50:01.682095 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.78s
2026-04-05 00:50:01.682116 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.22s
2026-04-05 00:50:01.682161 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.04s
2026-04-05 00:50:01.682171 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.76s
2026-04-05 00:50:01.682180 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.02s
2026-04-05 00:50:01.682190 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.90s
2026-04-05 00:50:01.682200 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.71s
2026-04-05 00:50:01.682209 | orchestrator |
2026-04-05 00:50:01.682219 | orchestrator |
2026-04-05 00:50:01.682228 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-05 00:50:01.682238 | orchestrator |
2026-04-05 00:50:01.682248 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-05 00:50:01.682257 | orchestrator | Sunday 05 April 2026 00:48:30 +0000 (0:00:00.366) 0:00:00.366 **********
2026-04-05 00:50:01.682267 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:50:01.682277 | orchestrator |
2026-04-05 00:50:01.682287 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-05 00:50:01.682296 | orchestrator | Sunday 05 April 2026 00:48:32 +0000 (0:00:01.399) 0:00:01.765 **********
2026-04-05 00:50:01.682306 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:50:01.682315 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:50:01.682325 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:50:01.682335 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:50:01.682344 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:50:01.682354 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:50:01.682364 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:50:01.682373 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:50:01.682383 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:50:01.682392 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:50:01.682402 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:50:01.682438 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:50:01.682449 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:50:01.682459 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:50:01.682469 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:50:01.682478 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:50:01.682546 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:50:01.682556 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:50:01.682566 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:50:01.682577 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:50:01.682586 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:50:01.682611 | orchestrator |
2026-04-05 00:50:01.682631 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-05 00:50:01.682641 | orchestrator | Sunday 05 April 2026 00:48:36 +0000 (0:00:04.017) 0:00:05.782 **********
2026-04-05 00:50:01.682651 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:50:01.682662 | orchestrator |
2026-04-05 00:50:01.682672 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-05 00:50:01.682681 | orchestrator | Sunday 05 April 2026 00:48:37 +0000 (0:00:01.419) 0:00:07.202 **********
2026-04-05 00:50:01.682703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:50:01.682719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:50:01.682730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:50:01.682741 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:50:01.682770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:50:01.682782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:50:01.682792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.682815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:50:01.682826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.682836 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.682846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.682875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.682892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.682909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.682920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.682931 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.682976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.682988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.682998 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.683008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.683042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.683060 | orchestrator | 2026-04-05 00:50:01.683070 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-05 00:50:01.683081 | orchestrator | Sunday 05 April 2026 00:48:41 +0000 (0:00:04.422) 0:00:11.624 ********** 2026-04-05 00:50:01.683091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683107 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683225 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683289 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:01.683326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2026-04-05 00:50:01.683339 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:50:01.683349 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:01.683359 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:01.683369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683424 | orchestrator | skipping: [testbed-node-3] 
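A minimal sketch (not actual kolla-ansible or OSISM role code): the loop items in the log above, of the form `(item={'key': ..., 'value': {...}})`, are the shape Ansible's `dict2items` filter (used by `with_dict`-style loops) produces from a mapping of service definitions. The service map below is an assumed, trimmed-down stand-in built from values visible in the log; the doubled slash in the image paths is reproduced verbatim from the log.

```python
# Sketch only: mimic how a dict of container definitions becomes the
# {'key': ..., 'value': ...} loop items shown in the Ansible output above.

services = {
    "fluentd": {
        "container_name": "fluentd",
        "enabled": True,
        # Image reference copied from the log (trimmed definition).
        "image": "registry.osism.tech/kolla/release//fluentd:5.0.9.20260328",
    },
    "cron": {
        "container_name": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release//cron:3.0.20260328",
    },
}

def dict2items(d):
    """Mimic Ansible's dict2items filter: one {'key', 'value'} dict per entry."""
    return [{"key": k, "value": v} for k, v in d.items()]

# The role then iterates these items per host, producing one
# "changed:"/"skipping:" line per (host, item) pair as seen in the log.
enabled_images = [
    item["value"]["image"]
    for item in dict2items(services)
    if item["value"]["enabled"]
]
```

Each task in the log (config copy, TLS cert copy, config.json copy) loops over the same item list, which is why the identical fluentd/kolla-toolbox/cron dicts recur under every task header.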
2026-04-05 00:50:01.683434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683470 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:01.683480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683546 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:01.683556 | orchestrator | 2026-04-05 00:50:01.683566 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-05 00:50:01.683576 | orchestrator | Sunday 05 April 2026 00:48:48 +0000 (0:00:06.307) 0:00:17.932 ********** 2026-04-05 00:50:01.683613 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683688 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:01.683751 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683803 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683858 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:50:01.683869 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:01.683879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683889 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:01.683899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683909 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:01.683934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.683971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.683982 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:01.683992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.684009 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:50:01.684019 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:50:01.684069 | orchestrator |
2026-04-05 00:50:01.684081 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-04-05 00:50:01.684091 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:05.965) 0:00:23.898 **********
2026-04-05 00:50:01.684101 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:50:01.684110 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:50:01.684120 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:50:01.684129 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:50:01.684139 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:50:01.684148 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:50:01.684157 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:50:01.684167 | orchestrator |
2026-04-05 00:50:01.684176 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-05 00:50:01.684186 | orchestrator | Sunday 05 April 2026 00:48:56 +0000 (0:00:02.376) 0:00:26.274 **********
2026-04-05 00:50:01.684195 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:50:01.684205 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:50:01.684214 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:50:01.684224 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:50:01.684234 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:50:01.684243 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:50:01.684269 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:50:01.684280 | orchestrator |
2026-04-05 00:50:01.684290 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-05 00:50:01.684299 | orchestrator | Sunday 05 April 2026 00:48:58 +0000 (0:00:01.893) 0:00:28.168 **********
2026-04-05 00:50:01.684309 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:50:01.684319 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:50:01.684328 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:50:01.684338 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:50:01.684347 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:50:01.684357 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:50:01.684366 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:50:01.684376 | orchestrator |
2026-04-05 00:50:01.684385 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-04-05 00:50:01.684395 | orchestrator | Sunday 05 April 2026 00:49:00 +0000 (0:00:02.213) 0:00:30.384 **********
2026-04-05 00:50:01.684405 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:50:01.684414 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:01.684424 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:50:01.684433 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:50:01.684442 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:50:01.684452 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:50:01.684461 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:50:01.684471 | orchestrator |
2026-04-05 00:50:01.684480 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-05 00:50:01.684490 | orchestrator | Sunday 05 April 2026 00:49:05 +0000 (0:00:04.701) 0:00:35.085 **********
2026-04-05 00:50:01.684565 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.684590 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.684607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.684618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.684628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.684668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-04-05 00:50:01.684677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.684695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684753 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684808 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684857 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.684870 | orchestrator | 2026-04-05 00:50:01.684890 | orchestrator | TASK [common : Find custom fluentd input config files] 
************************* 2026-04-05 00:50:01.684904 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:05.310) 0:00:40.395 ********** 2026-04-05 00:50:01.684918 | orchestrator | [WARNING]: Skipped 2026-04-05 00:50:01.684929 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-05 00:50:01.684937 | orchestrator | to this access issue: 2026-04-05 00:50:01.684945 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-05 00:50:01.684953 | orchestrator | directory 2026-04-05 00:50:01.684970 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:50:01.684978 | orchestrator | 2026-04-05 00:50:01.684985 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-05 00:50:01.684993 | orchestrator | Sunday 05 April 2026 00:49:12 +0000 (0:00:01.354) 0:00:41.749 ********** 2026-04-05 00:50:01.685001 | orchestrator | [WARNING]: Skipped 2026-04-05 00:50:01.685009 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-05 00:50:01.685017 | orchestrator | to this access issue: 2026-04-05 00:50:01.685025 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-05 00:50:01.685033 | orchestrator | directory 2026-04-05 00:50:01.685041 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:50:01.685049 | orchestrator | 2026-04-05 00:50:01.685056 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-05 00:50:01.685067 | orchestrator | Sunday 05 April 2026 00:49:13 +0000 (0:00:01.143) 0:00:42.893 ********** 2026-04-05 00:50:01.685080 | orchestrator | [WARNING]: Skipped 2026-04-05 00:50:01.685093 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-05 00:50:01.685105 | orchestrator | to this access issue: 2026-04-05 
00:50:01.685118 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-05 00:50:01.685130 | orchestrator | directory 2026-04-05 00:50:01.685142 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:50:01.685155 | orchestrator | 2026-04-05 00:50:01.685169 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-05 00:50:01.685181 | orchestrator | Sunday 05 April 2026 00:49:14 +0000 (0:00:01.323) 0:00:44.216 ********** 2026-04-05 00:50:01.685195 | orchestrator | [WARNING]: Skipped 2026-04-05 00:50:01.685209 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-05 00:50:01.685222 | orchestrator | to this access issue: 2026-04-05 00:50:01.685240 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-05 00:50:01.685249 | orchestrator | directory 2026-04-05 00:50:01.685257 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:50:01.685265 | orchestrator | 2026-04-05 00:50:01.685273 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-05 00:50:01.685281 | orchestrator | Sunday 05 April 2026 00:49:15 +0000 (0:00:01.516) 0:00:45.733 ********** 2026-04-05 00:50:01.685289 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:01.685297 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:01.685305 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:01.685313 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:01.685321 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:01.685329 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:01.685337 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:01.685345 | orchestrator | 2026-04-05 00:50:01.685353 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-05 
00:50:01.685361 | orchestrator | Sunday 05 April 2026 00:49:23 +0000 (0:00:07.192) 0:00:52.925 ********** 2026-04-05 00:50:01.685370 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:50:01.685378 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:50:01.685386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:50:01.685394 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:50:01.685402 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:50:01.685410 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:50:01.685418 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:50:01.685438 | orchestrator | 2026-04-05 00:50:01.685447 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-05 00:50:01.685455 | orchestrator | Sunday 05 April 2026 00:49:28 +0000 (0:00:05.573) 0:00:58.499 ********** 2026-04-05 00:50:01.685463 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:01.685471 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:01.685479 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:01.685487 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:01.685495 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:01.685527 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:01.685536 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:01.685578 | orchestrator | 2026-04-05 00:50:01.685589 | orchestrator | TASK [common : Ensuring config 
directories have correct owner and permission] *** 2026-04-05 00:50:01.685602 | orchestrator | Sunday 05 April 2026 00:49:34 +0000 (0:00:05.617) 0:01:04.117 ********** 2026-04-05 00:50:01.685635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.685653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.685666 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.685687 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.685703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.685720 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.685744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.685759 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.685775 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.685784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.685792 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.685806 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.685814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.685828 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.685836 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.685845 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.685859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.685868 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.685880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.685889 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.685903 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.685912 | orchestrator | 2026-04-05 00:50:01.685920 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-05 00:50:01.685927 | orchestrator | Sunday 05 April 2026 00:49:37 +0000 (0:00:03.398) 0:01:07.516 ********** 2026-04-05 00:50:01.685935 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:50:01.685943 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:50:01.685951 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:50:01.685959 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:50:01.685967 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:50:01.685975 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:50:01.685982 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:50:01.685990 | orchestrator | 2026-04-05 00:50:01.685998 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-05 00:50:01.686006 | orchestrator | Sunday 05 April 2026 00:49:41 +0000 (0:00:03.400) 0:01:10.916 ********** 2026-04-05 00:50:01.686046 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:50:01.686056 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:50:01.686064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:50:01.686072 | 
orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:50:01.686085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:50:01.686093 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:50:01.686101 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 00:50:01.686109 | orchestrator | 2026-04-05 00:50:01.686117 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-05 00:50:01.686125 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:03.661) 0:01:14.577 ********** 2026-04-05 00:50:01.686133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.686141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.686157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.686166 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.686174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.686182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.686196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:50:01.686213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686254 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686336 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:50:01.686353 | orchestrator | 2026-04-05 00:50:01.686361 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-05 00:50:01.686369 | orchestrator | Sunday 05 April 2026 00:49:49 +0000 (0:00:04.533) 0:01:19.111 ********** 2026-04-05 00:50:01.686377 | orchestrator | changed: [testbed-manager] => { 2026-04-05 00:50:01.686385 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:01.686394 | orchestrator | } 2026-04-05 00:50:01.686402 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:50:01.686410 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:01.686417 | orchestrator | } 2026-04-05 00:50:01.686425 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:50:01.686433 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:01.686441 | orchestrator | } 2026-04-05 00:50:01.686449 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:50:01.686489 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:01.686497 | orchestrator | } 2026-04-05 00:50:01.686558 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 00:50:01.686613 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 
00:50:01.686629 | orchestrator | } 2026-04-05 00:50:01.686643 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 00:50:01.686665 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:01.686679 | orchestrator | } 2026-04-05 00:50:01.686687 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 00:50:01.686701 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:01.686713 | orchestrator | } 2026-04-05 00:50:01.686737 | orchestrator | 2026-04-05 00:50:01.686751 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:50:01.686765 | orchestrator | Sunday 05 April 2026 00:49:50 +0000 (0:00:00.853) 0:01:19.964 ********** 2026-04-05 00:50:01.686779 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.686793 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686808 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686817 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:50:01.686825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.686834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.686875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.686905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686922 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:01.686930 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:01.686937 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:01.686945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.686954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.686980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.686997 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:01.687005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.687018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.687027 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:01.687035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:50:01.687043 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.687051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:50:01.687065 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:01.687073 | orchestrator | 2026-04-05 00:50:01.687081 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-05 00:50:01.687089 | orchestrator | Sunday 05 April 2026 00:49:52 +0000 (0:00:02.060) 0:01:22.025 ********** 2026-04-05 00:50:01.687097 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:01.687105 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:01.687113 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:01.687120 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:01.687128 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:01.687136 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:01.687144 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:01.687151 | orchestrator | 
2026-04-05 00:50:01.687164 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-05 00:50:01.687172 | orchestrator | Sunday 05 April 2026 00:49:54 +0000 (0:00:01.911) 0:01:23.937 ********** 2026-04-05 00:50:01.687180 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:01.687187 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:01.687195 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:01.687203 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:01.687211 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:01.687219 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:01.687227 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:01.687234 | orchestrator | 2026-04-05 00:50:01.687242 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:50:01.687250 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:01.636) 0:01:25.573 ********** 2026-04-05 00:50:01.687280 | orchestrator | 2026-04-05 00:50:01.687295 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:50:01.687313 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.091) 0:01:25.664 ********** 2026-04-05 00:50:01.687333 | orchestrator | 2026-04-05 00:50:01.687345 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:50:01.687357 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.094) 0:01:25.759 ********** 2026-04-05 00:50:01.687369 | orchestrator | 2026-04-05 00:50:01.687381 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:50:01.687394 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.102) 0:01:25.861 ********** 2026-04-05 00:50:01.687406 | orchestrator | 2026-04-05 00:50:01.687418 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-04-05 00:50:01.687429 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.061) 0:01:25.923 ********** 2026-04-05 00:50:01.687440 | orchestrator | 2026-04-05 00:50:01.687450 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:50:01.687463 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.061) 0:01:25.985 ********** 2026-04-05 00:50:01.687476 | orchestrator | 2026-04-05 00:50:01.687488 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:50:01.687564 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.058) 0:01:26.043 ********** 2026-04-05 00:50:01.687583 | orchestrator | 2026-04-05 00:50:01.687597 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-05 00:50:01.687619 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.082) 0:01:26.126 ********** 2026-04-05 00:50:01.687649 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_cymywmv2/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_cymywmv2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_cymywmv2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_cymywmv2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:01.687676 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_sjxvqfiu/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_sjxvqfiu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_sjxvqfiu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_sjxvqfiu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:01.687698 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_s9aqpo1a/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_s9aqpo1a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_s9aqpo1a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_s9aqpo1a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:01.687714 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1v6iy40c/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1v6iy40c/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_1v6iy40c/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1v6iy40c/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:01.687738 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_f_z2fmjz/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_f_z2fmjz/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_f_z2fmjz/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_f_z2fmjz/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:01.687748 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_w36_e3gb/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_w36_e3gb/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_w36_e3gb/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_w36_e3gb/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:01.687779 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_n1b7eqvk/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_n1b7eqvk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_n1b7eqvk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_n1b7eqvk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"}
2026-04-05 00:50:01.687794 | orchestrator |
2026-04-05 00:50:01.687802 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:50:01.687811 | orchestrator | testbed-manager : ok=20  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-05 00:50:01.687820 | orchestrator | testbed-node-0 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-05 00:50:01.687828 | orchestrator | testbed-node-1 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-05 00:50:01.687836 | orchestrator | testbed-node-2 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-05 00:50:01.687844 | orchestrator | testbed-node-3 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-05 00:50:01.687852 | orchestrator | testbed-node-4 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-05 00:50:01.687860 | orchestrator | testbed-node-5 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-05 00:50:01.687867 | orchestrator |
2026-04-05 00:50:01.687875 | orchestrator |
2026-04-05 00:50:01.687883 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:50:01.687892 | orchestrator | Sunday 05 April 2026 00:50:00 +0000 (0:00:04.481) 0:01:30.607 **********
2026-04-05 00:50:01.687900 | orchestrator | ===============================================================================
2026-04-05 00:50:01.687908 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.19s
2026-04-05 00:50:01.687915 |
orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 6.31s
2026-04-05 00:50:01.687924 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.97s
2026-04-05 00:50:01.687936 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 5.62s
2026-04-05 00:50:01.687944 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.57s
2026-04-05 00:50:01.687952 | orchestrator | common : Copying over config.json files for services -------------------- 5.31s
2026-04-05 00:50:01.687960 | orchestrator | common : Copying over kolla.target -------------------------------------- 4.70s
2026-04-05 00:50:01.687969 | orchestrator | service-check-containers : common | Check containers -------------------- 4.53s
2026-04-05 00:50:01.687976 | orchestrator | common : Restart fluentd container -------------------------------------- 4.48s
2026-04-05 00:50:01.687984 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.42s
2026-04-05 00:50:01.687992 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.02s
2026-04-05 00:50:01.688000 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.66s
2026-04-05 00:50:01.688008 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.40s
2026-04-05 00:50:01.688016 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.40s
2026-04-05 00:50:01.688024 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.38s
2026-04-05 00:50:01.688032 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.22s
2026-04-05 00:50:01.688048 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.06s
2026-04-05 00:50:01.688057 |
orchestrator | common : Creating log volume -------------------------------------------- 1.91s
2026-04-05 00:50:01.688065 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.89s
2026-04-05 00:50:01.688073 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.64s
2026-04-05 00:50:01.688081 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task af3f8de3-2412-4c73-b082-0c7644b93d96 is in state SUCCESS
2026-04-05 00:50:01.688089 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:01.688100 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:01.688107 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:01.688114 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:50:01.688121 | orchestrator | 2026-04-05 00:50:01 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:04.746610 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:04.747948 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:04.750313 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:04.751216 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:04.755547 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED
2026-04-05 00:50:04.756345 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED
2026-04-05 00:50:04.757205 | orchestrator |
2026-04-05 00:50:04 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:04.758370 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:50:04.758422 | orchestrator | 2026-04-05 00:50:04 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:07.802127 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:07.803226 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:07.804206 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:07.805576 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:07.806685 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED
2026-04-05 00:50:07.811425 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED
2026-04-05 00:50:07.814484 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:07.815774 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:50:07.815816 | orchestrator | 2026-04-05 00:50:07 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:10.885858 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:10.888145 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:10.889945 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:10.891008 | orchestrator |
2026-04-05 00:50:10 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:10.891881 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED
2026-04-05 00:50:10.892524 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED
2026-04-05 00:50:10.893182 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:10.894103 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:50:10.894147 | orchestrator | 2026-04-05 00:50:10 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:13.954398 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:13.954472 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:13.958255 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:13.958316 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:13.958577 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED
2026-04-05 00:50:13.965281 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED
2026-04-05 00:50:13.966939 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:13.968160 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:50:13.968296 | orchestrator | 2026-04-05 00:50:13 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:17.162389 | orchestrator |
2026-04-05 00:50:17 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:17.164773 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:17.168945 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:17.173325 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:17.176612 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED
2026-04-05 00:50:17.182969 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED
2026-04-05 00:50:17.188263 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:17.194239 | orchestrator | 2026-04-05 00:50:17 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:50:17.194342 | orchestrator | 2026-04-05 00:50:17 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:20.248064 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:20.249617 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:20.252466 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:20.253766 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:20.256168 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED
2026-04-05 00:50:20.259685 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED
2026-04-05 00:50:20.260671
| orchestrator | 2026-04-05 00:50:20 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:20.261683 | orchestrator | 2026-04-05 00:50:20 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:50:20.261701 | orchestrator | 2026-04-05 00:50:20 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:23.302350 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:23.305729 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:23.309984 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:23.311209 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:23.312739 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED
2026-04-05 00:50:23.315948 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED
2026-04-05 00:50:23.318746 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:23.320459 | orchestrator | 2026-04-05 00:50:23 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state STARTED
2026-04-05 00:50:23.320792 | orchestrator | 2026-04-05 00:50:23 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:26.410470 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:26.410654 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:26.410671 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:26.410701 |
orchestrator | 2026-04-05 00:50:26 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state STARTED
2026-04-05 00:50:26.410713 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED
2026-04-05 00:50:26.410724 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED
2026-04-05 00:50:26.410735 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:50:26.410746 | orchestrator | 2026-04-05 00:50:26 | INFO  | Task 20b80d4d-c314-4343-b1db-f30dec724d0d is in state SUCCESS
2026-04-05 00:50:26.410757 | orchestrator | 2026-04-05 00:50:26 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:50:29.449286 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:50:29.449387 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED
2026-04-05 00:50:29.450740 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED
2026-04-05 00:50:29.451650 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 9422e6ba-d330-48fd-9a2d-99f106f6bd92 is in state SUCCESS
2026-04-05 00:50:29.452862 | orchestrator |
2026-04-05 00:50:29.452895 | orchestrator |
2026-04-05 00:50:29.452901 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-04-05 00:50:29.452907 | orchestrator |
2026-04-05 00:50:29.452912 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-04-05 00:50:29.452917 | orchestrator | Sunday 05 April 2026 00:49:05 +0000 (0:00:00.374) 0:00:00.374 **********
2026-04-05 00:50:29.452921 | orchestrator | ok: [testbed-manager]
2026-04-05 00:50:29.452927 | orchestrator |
2026-04-05 00:50:29.452932 | orchestrator | TASK [osism.services.phpmyadmin :
Create required directories] *****************
2026-04-05 00:50:29.452937 | orchestrator | Sunday 05 April 2026 00:49:07 +0000 (0:00:02.051) 0:00:02.425 **********
2026-04-05 00:50:29.452942 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-04-05 00:50:29.452947 | orchestrator |
2026-04-05 00:50:29.452951 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-04-05 00:50:29.452956 | orchestrator | Sunday 05 April 2026 00:49:08 +0000 (0:00:00.840) 0:00:03.265 **********
2026-04-05 00:50:29.452961 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:29.452965 | orchestrator |
2026-04-05 00:50:29.452970 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-04-05 00:50:29.452974 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:02.138) 0:00:05.403 **********
2026-04-05 00:50:29.452978 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-04-05 00:50:29.452983 | orchestrator | ok: [testbed-manager]
2026-04-05 00:50:29.452987 | orchestrator |
2026-04-05 00:50:29.452992 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-05 00:50:29.452996 | orchestrator | Sunday 05 April 2026 00:50:15 +0000 (0:01:04.970) 0:01:10.374 **********
2026-04-05 00:50:29.453000 | orchestrator | changed: [testbed-manager]
2026-04-05 00:50:29.453005 | orchestrator |
2026-04-05 00:50:29.453010 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:50:29.453014 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:50:29.453021 | orchestrator |
2026-04-05 00:50:29.453026 | orchestrator |
2026-04-05 00:50:29.453030 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:50:29.453035 | orchestrator | Sunday 05 April 2026 00:50:24 +0000 (0:00:09.275) 0:01:19.649 **********
2026-04-05 00:50:29.453039 | orchestrator | ===============================================================================
2026-04-05 00:50:29.453044 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 64.97s
2026-04-05 00:50:29.453048 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 9.28s
2026-04-05 00:50:29.453052 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.14s
2026-04-05 00:50:29.453056 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.05s
2026-04-05 00:50:29.453061 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.84s
2026-04-05 00:50:29.453065 | orchestrator |
2026-04-05 00:50:29.453070 | orchestrator |
2026-04-05 00:50:29.453074 | orchestrator | PLAY [Group hosts based on configuration]
************************************** 2026-04-05 00:50:29.453078 | orchestrator | 2026-04-05 00:50:29.453083 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:50:29.453087 | orchestrator | Sunday 05 April 2026 00:48:38 +0000 (0:00:01.074) 0:00:01.074 ********** 2026-04-05 00:50:29.453092 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-05 00:50:29.453096 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-05 00:50:29.453101 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-05 00:50:29.453105 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-05 00:50:29.453123 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-05 00:50:29.453128 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-05 00:50:29.453132 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-05 00:50:29.453136 | orchestrator | 2026-04-05 00:50:29.453141 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-05 00:50:29.453145 | orchestrator | 2026-04-05 00:50:29.453149 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-05 00:50:29.453164 | orchestrator | Sunday 05 April 2026 00:48:41 +0000 (0:00:02.746) 0:00:03.820 ********** 2026-04-05 00:50:29.453170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:50:29.453182 | orchestrator | 2026-04-05 00:50:29.453187 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-05 00:50:29.453191 | orchestrator | Sunday 05 April 2026 
00:48:44 +0000 (0:00:02.958) 0:00:06.778 ********** 2026-04-05 00:50:29.453196 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:29.453201 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:29.453205 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:29.453209 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:29.453214 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:29.453218 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:29.453222 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:29.453226 | orchestrator | 2026-04-05 00:50:29.453231 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-05 00:50:29.453238 | orchestrator | Sunday 05 April 2026 00:48:50 +0000 (0:00:06.463) 0:00:13.242 ********** 2026-04-05 00:50:29.453245 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:29.453253 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:29.453260 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:29.453267 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:29.453274 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:29.453282 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:29.453289 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:29.453296 | orchestrator | 2026-04-05 00:50:29.453309 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-05 00:50:29.453314 | orchestrator | Sunday 05 April 2026 00:48:53 +0000 (0:00:03.175) 0:00:16.417 ********** 2026-04-05 00:50:29.453318 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:29.453323 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:29.453327 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:29.453331 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:29.453336 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:29.453340 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:29.453344 | orchestrator 
| changed: [testbed-node-5] 2026-04-05 00:50:29.453348 | orchestrator | 2026-04-05 00:50:29.453353 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-05 00:50:29.453361 | orchestrator | Sunday 05 April 2026 00:48:56 +0000 (0:00:03.182) 0:00:19.600 ********** 2026-04-05 00:50:29.453368 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:29.453375 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:29.453382 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:29.453390 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:29.453398 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:29.453405 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:29.453412 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:29.453418 | orchestrator | 2026-04-05 00:50:29.453425 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-05 00:50:29.453432 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:13.361) 0:00:32.962 ********** 2026-04-05 00:50:29.453440 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:29.453490 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:29.453537 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:29.453543 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:29.453549 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:29.453554 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:29.453559 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:29.453564 | orchestrator | 2026-04-05 00:50:29.453569 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-05 00:50:29.453574 | orchestrator | Sunday 05 April 2026 00:49:58 +0000 (0:00:47.988) 0:01:20.950 ********** 2026-04-05 00:50:29.453581 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:50:29.453587 | orchestrator | 2026-04-05 00:50:29.453592 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-05 00:50:29.453597 | orchestrator | Sunday 05 April 2026 00:50:00 +0000 (0:00:02.124) 0:01:23.075 ********** 2026-04-05 00:50:29.453605 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-05 00:50:29.453613 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-05 00:50:29.453622 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-05 00:50:29.453630 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-05 00:50:29.453639 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-05 00:50:29.453646 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-05 00:50:29.453656 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-05 00:50:29.453661 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-05 00:50:29.453668 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-05 00:50:29.453676 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-05 00:50:29.453683 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-05 00:50:29.453690 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-05 00:50:29.453698 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-05 00:50:29.453706 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-05 00:50:29.453714 | orchestrator | 2026-04-05 00:50:29.453721 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-05 
00:50:29.453727 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:04.852) 0:01:27.927 ********** 2026-04-05 00:50:29.453734 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:29.453742 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:29.453750 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:29.453758 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:29.453765 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:29.453778 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:29.453785 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:29.453790 | orchestrator | 2026-04-05 00:50:29.453796 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-05 00:50:29.453801 | orchestrator | Sunday 05 April 2026 00:50:06 +0000 (0:00:01.624) 0:01:29.552 ********** 2026-04-05 00:50:29.453807 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:29.453812 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:29.453817 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:29.453822 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:29.453827 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:29.453832 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:29.453837 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:29.453843 | orchestrator | 2026-04-05 00:50:29.453848 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-05 00:50:29.453853 | orchestrator | Sunday 05 April 2026 00:50:08 +0000 (0:00:01.487) 0:01:31.039 ********** 2026-04-05 00:50:29.453864 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:29.453869 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:29.453874 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:29.453880 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:29.453884 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:29.453888 | orchestrator | ok: 
[testbed-node-5] 2026-04-05 00:50:29.453893 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:29.453897 | orchestrator | 2026-04-05 00:50:29.453901 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-05 00:50:29.453906 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:02.139) 0:01:33.178 ********** 2026-04-05 00:50:29.453911 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:29.453915 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:29.453925 | orchestrator | ok: [testbed-manager] 2026-04-05 00:50:29.453930 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:29.453934 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:29.453939 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:29.453943 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:29.453947 | orchestrator | 2026-04-05 00:50:29.453952 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-05 00:50:29.453956 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:01.660) 0:01:34.839 ********** 2026-04-05 00:50:29.453961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-05 00:50:29.453968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:50:29.453973 | orchestrator | 2026-04-05 00:50:29.453977 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-05 00:50:29.453982 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:01.618) 0:01:36.457 ********** 2026-04-05 00:50:29.453986 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:29.453990 | orchestrator | 2026-04-05 00:50:29.453995 | orchestrator | RUNNING HANDLER 
[osism.services.netdata : Restart service netdata] ************* 2026-04-05 00:50:29.453999 | orchestrator | Sunday 05 April 2026 00:50:16 +0000 (0:00:02.732) 0:01:39.190 ********** 2026-04-05 00:50:29.454003 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:29.454008 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:29.454013 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:29.454062 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:29.454069 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:29.454077 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:29.454085 | orchestrator | changed: [testbed-manager] 2026-04-05 00:50:29.454092 | orchestrator | 2026-04-05 00:50:29.454099 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:50:29.454104 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:29.454109 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:29.454113 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:29.454118 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:29.454122 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:29.454127 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:29.454131 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:50:29.454140 | orchestrator | 2026-04-05 00:50:29.454145 | orchestrator | 2026-04-05 00:50:29.454149 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 
00:50:29.454154 | orchestrator | Sunday 05 April 2026 00:50:28 +0000 (0:00:11.828) 0:01:51.018 ********** 2026-04-05 00:50:29.454158 | orchestrator | =============================================================================== 2026-04-05 00:50:29.454162 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 47.99s 2026-04-05 00:50:29.454167 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.36s 2026-04-05 00:50:29.454171 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.83s 2026-04-05 00:50:29.454176 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 6.46s 2026-04-05 00:50:29.454184 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.85s 2026-04-05 00:50:29.454188 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.18s 2026-04-05 00:50:29.454192 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.18s 2026-04-05 00:50:29.454197 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.96s 2026-04-05 00:50:29.454201 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.75s 2026-04-05 00:50:29.454206 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.73s 2026-04-05 00:50:29.454210 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.14s 2026-04-05 00:50:29.454214 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.12s 2026-04-05 00:50:29.454219 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.66s 2026-04-05 00:50:29.454223 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.62s 2026-04-05 
00:50:29.454228 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.62s 2026-04-05 00:50:29.454232 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.49s 2026-04-05 00:50:29.454311 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED 2026-04-05 00:50:29.456883 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED 2026-04-05 00:50:29.457624 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:29.457705 | orchestrator | 2026-04-05 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:32.652465 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:32.653114 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED 2026-04-05 00:50:32.654129 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:32.655704 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED 2026-04-05 00:50:32.657854 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state STARTED 2026-04-05 00:50:32.658838 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:32.659002 | orchestrator | 2026-04-05 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:35.695444 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:35.697137 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:50:35.698822 | orchestrator 
| 2026-04-05 00:50:35 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED 2026-04-05 00:50:35.701316 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:35.703344 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state STARTED 2026-04-05 00:50:35.706316 | orchestrator | 2026-04-05 00:50:35.706445 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 2ed3ec56-6a01-4160-a7f4-ecfae6cdb688 is in state SUCCESS 2026-04-05 00:50:35.706996 | orchestrator | 2026-04-05 00:50:35.707058 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:50:35.707069 | orchestrator | 2026-04-05 00:50:35.707077 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:50:35.707084 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:00.844) 0:00:00.844 ********** 2026-04-05 00:50:35.707091 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:35.707098 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:35.707105 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:35.707111 | orchestrator | 2026-04-05 00:50:35.707118 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:50:35.707124 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:01.107) 0:00:01.951 ********** 2026-04-05 00:50:35.707131 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-05 00:50:35.707138 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-05 00:50:35.707145 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-05 00:50:35.707152 | orchestrator | 2026-04-05 00:50:35.707158 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-05 00:50:35.707165 | orchestrator | 2026-04-05 
00:50:35.707171 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-05 00:50:35.707178 | orchestrator | Sunday 05 April 2026 00:50:15 +0000 (0:00:01.614) 0:00:03.566 ********** 2026-04-05 00:50:35.707184 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:50:35.707191 | orchestrator | 2026-04-05 00:50:35.707211 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-05 00:50:35.707217 | orchestrator | Sunday 05 April 2026 00:50:17 +0000 (0:00:02.054) 0:00:05.620 ********** 2026-04-05 00:50:35.707224 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-05 00:50:35.707230 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-05 00:50:35.707237 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-05 00:50:35.707243 | orchestrator | 2026-04-05 00:50:35.707250 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-05 00:50:35.707256 | orchestrator | Sunday 05 April 2026 00:50:20 +0000 (0:00:02.666) 0:00:08.286 ********** 2026-04-05 00:50:35.707262 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-05 00:50:35.707269 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-05 00:50:35.707276 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-05 00:50:35.707282 | orchestrator | 2026-04-05 00:50:35.707289 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-05 00:50:35.707309 | orchestrator | Sunday 05 April 2026 00:50:23 +0000 (0:00:02.742) 0:00:11.029 ********** 2026-04-05 00:50:35.707326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:50:35.707369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:50:35.707389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:50:35.707397 | orchestrator | 2026-04-05 00:50:35.707404 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-05 00:50:35.707418 | orchestrator | Sunday 05 April 2026 00:50:25 +0000 (0:00:02.713) 0:00:13.742 ********** 2026-04-05 00:50:35.707425 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:50:35.707432 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:35.707439 | orchestrator | } 2026-04-05 00:50:35.707445 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:50:35.707452 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:35.707459 | orchestrator | } 2026-04-05 00:50:35.707466 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:50:35.707472 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:35.707478 | orchestrator | } 2026-04-05 00:50:35.707485 | orchestrator | 2026-04-05 00:50:35.707492 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:50:35.707516 | orchestrator | Sunday 05 April 2026 00:50:26 +0000 (0:00:00.607) 0:00:14.350 ********** 2026-04-05 00:50:35.707528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:50:35.707536 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:35.707542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:50:35.707555 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:35.707562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:50:35.707569 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
00:50:35.707576 | orchestrator | 2026-04-05 00:50:35.707585 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-05 00:50:35.707592 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:02.696) 0:00:17.047 ********** 2026-04-05 00:50:35.707613 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_wmgt78g3/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_wmgt78g3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_wmgt78g3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_wmgt78g3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:35.707630 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6elsrdf5/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_6elsrdf5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_6elsrdf5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6elsrdf5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:35.707651 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_l1c6htjt/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_l1c6htjt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_l1c6htjt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_l1c6htjt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:35.707666 | orchestrator | 2026-04-05 00:50:35.707674 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:50:35.707682 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:50:35.707691 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:50:35.707699 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:50:35.707707 | orchestrator | 2026-04-05 00:50:35.707715 | orchestrator | 2026-04-05 00:50:35.707722 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:50:35.707730 | orchestrator | Sunday 05 April 2026 00:50:31 +0000 (0:00:02.551) 0:00:19.598 ********** 2026-04-05 00:50:35.707737 | orchestrator | =============================================================================== 2026-04-05 00:50:35.707744 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.74s 2026-04-05 00:50:35.707752 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.71s 2026-04-05 00:50:35.707759 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.70s 2026-04-05 00:50:35.707767 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.67s 2026-04-05 00:50:35.707775 | orchestrator | memcached : Restart memcached container --------------------------------- 2.55s 2026-04-05 00:50:35.707782 | orchestrator | memcached : include_tasks 
----------------------------------------------- 2.05s 2026-04-05 00:50:35.707789 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.61s 2026-04-05 00:50:35.707797 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.11s 2026-04-05 00:50:35.707804 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.61s 2026-04-05 00:50:35.708344 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:35.708615 | orchestrator | 2026-04-05 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:38.755185 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:38.756423 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:50:38.758265 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED 2026-04-05 00:50:38.759444 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:38.762396 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 8d2ba09e-dd89-47f3-9f00-bf1f95288970 is in state SUCCESS 2026-04-05 00:50:38.763552 | orchestrator | 2026-04-05 00:50:38.763593 | orchestrator | 2026-04-05 00:50:38.763600 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:50:38.763630 | orchestrator | 2026-04-05 00:50:38.763638 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:50:38.763651 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:00.844) 0:00:00.844 ********** 2026-04-05 00:50:38.763655 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:38.763660 | orchestrator | ok: [testbed-node-1] 
2026-04-05 00:50:38.763664 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:38.763668 | orchestrator | 2026-04-05 00:50:38.763672 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:50:38.763685 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:00.877) 0:00:01.722 ********** 2026-04-05 00:50:38.763689 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-05 00:50:38.763694 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-05 00:50:38.763697 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-05 00:50:38.763701 | orchestrator | 2026-04-05 00:50:38.763705 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-05 00:50:38.763709 | orchestrator | 2026-04-05 00:50:38.763713 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-05 00:50:38.763717 | orchestrator | Sunday 05 April 2026 00:50:14 +0000 (0:00:00.887) 0:00:02.610 ********** 2026-04-05 00:50:38.763721 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-04-05 00:50:38.763747 | orchestrator | 2026-04-05 00:50:38.763758 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-05 00:50:38.763762 | orchestrator | Sunday 05 April 2026 00:50:15 +0000 (0:00:01.764) 0:00:04.375 ********** 2026-04-05 00:50:38.763768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763860 | orchestrator | 2026-04-05 00:50:38.763864 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-05 00:50:38.763868 | orchestrator | Sunday 05 April 2026 00:50:19 +0000 (0:00:03.719) 0:00:08.094 ********** 2026-04-05 00:50:38.763872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763906 | orchestrator | 2026-04-05 00:50:38.763910 | orchestrator | TASK 
[redis : Copying over redis config files] ********************************* 2026-04-05 00:50:38.763914 | orchestrator | Sunday 05 April 2026 00:50:22 +0000 (0:00:03.182) 0:00:11.276 ********** 2026-04-05 00:50:38.763918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2026-04-05 00:50:38.763930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763951 | orchestrator | 2026-04-05 00:50:38.763955 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-05 00:50:38.763959 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:04.753) 0:00:16.029 ********** 2026-04-05 00:50:38.763978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.763999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.764012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:50:38.764019 | orchestrator | 2026-04-05 00:50:38.764042 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-05 00:50:38.764050 | orchestrator | Sunday 05 April 2026 00:50:30 +0000 (0:00:02.737) 0:00:18.767 ********** 2026-04-05 00:50:38.764056 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:50:38.764063 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:38.764202 | orchestrator | } 2026-04-05 00:50:38.764207 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:50:38.764212 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:38.764217 | orchestrator | } 2026-04-05 00:50:38.764222 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:50:38.764226 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:38.764230 | orchestrator | } 2026-04-05 00:50:38.764234 | orchestrator | 2026-04-05 00:50:38.764246 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:50:38.764250 | orchestrator | Sunday 05 April 2026 00:50:32 +0000 (0:00:02.289) 0:00:21.056 ********** 2026-04-05 00:50:38.764255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 00:50:38.764260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 00:50:38.764270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-05 00:50:38.764275 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:38.764280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-05 00:50:38.764284 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:38.764295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 00:50:38.764304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-05 00:50:38.764309 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 00:50:38.764313 | orchestrator | 2026-04-05 00:50:38.764317 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-05 00:50:38.764322 | orchestrator | Sunday 05 April 2026 00:50:33 +0000 (0:00:01.155) 0:00:22.212 ********** 2026-04-05 00:50:38.764326 | orchestrator | 2026-04-05 00:50:38.764330 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-05 00:50:38.764335 | orchestrator | Sunday 05 April 2026 00:50:33 +0000 (0:00:00.091) 0:00:22.303 ********** 2026-04-05 00:50:38.764339 | orchestrator | 2026-04-05 00:50:38.764344 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-05 00:50:38.764348 | orchestrator | Sunday 05 April 2026 00:50:33 +0000 (0:00:00.084) 0:00:22.387 ********** 2026-04-05 00:50:38.764358 | orchestrator | 2026-04-05 00:50:38.764363 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-05 00:50:38.764368 | orchestrator | Sunday 05 April 2026 00:50:34 +0000 (0:00:00.100) 0:00:22.488 ********** 2026-04-05 00:50:38.764378 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_a47rkcc9/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_a47rkcc9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_a47rkcc9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_a47rkcc9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:38.764393 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_hpkdzepk/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_hpkdzepk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_hpkdzepk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_hpkdzepk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:38.764412 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_a915k372/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_a915k372/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_a915k372/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_a915k372/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:38.764420 | orchestrator | 2026-04-05 00:50:38.764427 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:50:38.764436 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:50:38.764442 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:50:38.764452 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:50:38.764457 | orchestrator | 2026-04-05 00:50:38.764461 | orchestrator | 2026-04-05 00:50:38.764466 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:50:38.764470 | orchestrator | Sunday 05 April 2026 00:50:36 +0000 (0:00:02.878) 0:00:25.367 ********** 2026-04-05 00:50:38.764475 | orchestrator | 
=============================================================================== 2026-04-05 00:50:38.764479 | orchestrator | redis : Copying over redis config files --------------------------------- 4.75s 2026-04-05 00:50:38.764483 | orchestrator | redis : Ensuring config directories exist ------------------------------- 3.72s 2026-04-05 00:50:38.764488 | orchestrator | redis : Copying over default config.json files -------------------------- 3.18s 2026-04-05 00:50:38.764530 | orchestrator | redis : Restart redis container ----------------------------------------- 2.88s 2026-04-05 00:50:38.764537 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.74s 2026-04-05 00:50:38.764544 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 2.29s 2026-04-05 00:50:38.764551 | orchestrator | redis : include_tasks --------------------------------------------------- 1.76s 2026-04-05 00:50:38.764556 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.16s 2026-04-05 00:50:38.764563 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2026-04-05 00:50:38.764569 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2026-04-05 00:50:38.764577 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.28s 2026-04-05 00:50:38.764678 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:38.764686 | orchestrator | 2026-04-05 00:50:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:41.856211 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:41.858362 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:50:41.859900 | 
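The tracebacks above all fail on the same pull URL: `fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis`, i.e. the repository `registry.osism.tech/kolla/release//redis`. The double slash (likely an empty namespace variable being interpolated between `release/` and the image name) yields an empty path component, which Docker's reference grammar rejects, hence the 400 "invalid reference format". A minimal sketch of that check, using a simplified stand-in for Docker's full `distribution/reference` grammar (the regex here is an approximation, not the exact upstream rule):

```python
import re

# Simplified sketch of one path-component rule from Docker's reference
# grammar: a non-empty run of lowercase alphanumerics, with '.', '_',
# '__', or runs of '-' allowed only between alphanumeric runs.
COMPONENT = re.compile(r"^[a-z0-9]+(?:(?:[._]|__|-+)[a-z0-9]+)*$")

def is_valid_repository(repo: str) -> bool:
    """Return True if every slash-separated path component is valid."""
    parts = repo.split("/")
    # Strip an optional registry host (a first component containing
    # '.' or ':'), which follows different, looser rules.
    if parts and ("." in parts[0] or ":" in parts[0]):
        parts = parts[1:]
    # An empty component -- e.g. the '' between 'release//' -- fails here.
    return bool(parts) and all(COMPONENT.match(p) for p in parts)

print(is_valid_repository("registry.osism.tech/kolla/release//redis"))  # False
print(is_valid_repository("registry.osism.tech/kolla/release/redis"))   # True
```

Under that reading, the fix is in configuration rather than code: whatever variable supplies the namespace segment between `release` and the image name (a kolla-ansible namespace/prefix setting, exact name not visible in this log) is expanding to an empty string.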
orchestrator | 2026-04-05 00:50:41 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED 2026-04-05 00:50:41.860202 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:41.862235 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:41.862352 | orchestrator | 2026-04-05 00:50:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:44.903356 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:44.903913 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:50:44.905398 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED 2026-04-05 00:50:44.906149 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:44.907096 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:44.907131 | orchestrator | 2026-04-05 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:47.936653 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:47.939102 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:50:47.940986 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED 2026-04-05 00:50:47.943804 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:47.945311 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:47.945710 | 
orchestrator | 2026-04-05 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:50.985041 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:50.986417 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:50:50.988338 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state STARTED 2026-04-05 00:50:50.990147 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:50.992791 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:50.992963 | orchestrator | 2026-04-05 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:54.057093 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:54.060943 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:50:54.065581 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:50:54.070775 | orchestrator | 2026-04-05 00:50:54.070858 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task cdbcdf8a-59f7-4374-ae1c-5733da8039df is in state SUCCESS 2026-04-05 00:50:54.075385 | orchestrator | 2026-04-05 00:50:54.075459 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:50:54.075473 | orchestrator | 2026-04-05 00:50:54.075485 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:50:54.075574 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:00.939) 0:00:00.942 ********** 2026-04-05 00:50:54.075584 | orchestrator | ok: [testbed-node-0] 2026-04-05 
00:50:54.075592 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:54.075600 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:54.075608 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:54.075615 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:54.075622 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:54.075630 | orchestrator | 2026-04-05 00:50:54.075637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:50:54.075645 | orchestrator | Sunday 05 April 2026 00:50:14 +0000 (0:00:01.840) 0:00:02.783 ********** 2026-04-05 00:50:54.075653 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:54.075660 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:54.075668 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:54.075675 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:54.075682 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:54.075689 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:54.075697 | orchestrator | 2026-04-05 00:50:54.075704 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-05 00:50:54.075711 | orchestrator | 2026-04-05 00:50:54.075719 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-05 00:50:54.075726 | orchestrator | Sunday 05 April 2026 00:50:17 +0000 (0:00:03.265) 0:00:06.048 ********** 2026-04-05 00:50:54.075760 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 
00:50:54.075778 | orchestrator | 2026-04-05 00:50:54.075797 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-05 00:50:54.075809 | orchestrator | Sunday 05 April 2026 00:50:20 +0000 (0:00:02.408) 0:00:08.456 ********** 2026-04-05 00:50:54.075822 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-05 00:50:54.075834 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-05 00:50:54.075847 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-05 00:50:54.075858 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-05 00:50:54.075869 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-05 00:50:54.075894 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-05 00:50:54.075907 | orchestrator | 2026-04-05 00:50:54.075918 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-05 00:50:54.075932 | orchestrator | Sunday 05 April 2026 00:50:23 +0000 (0:00:02.692) 0:00:11.149 ********** 2026-04-05 00:50:54.075944 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-05 00:50:54.075955 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-05 00:50:54.075967 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-05 00:50:54.075980 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-05 00:50:54.075992 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-05 00:50:54.076018 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-05 00:50:54.076031 | orchestrator | 2026-04-05 00:50:54.076039 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-05 00:50:54.076046 | orchestrator | Sunday 05 April 2026 00:50:26 +0000 (0:00:03.615) 0:00:14.765 ********** 2026-04-05 00:50:54.076053 | orchestrator | 
skipping: [testbed-node-0] => (item=openvswitch)  2026-04-05 00:50:54.076061 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:54.076068 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-05 00:50:54.076120 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:54.076132 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-05 00:50:54.076144 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:54.076156 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-05 00:50:54.076168 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:54.076181 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-05 00:50:54.076189 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:54.076197 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-05 00:50:54.076204 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:54.076212 | orchestrator | 2026-04-05 00:50:54.076219 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-05 00:50:54.076226 | orchestrator | Sunday 05 April 2026 00:50:28 +0000 (0:00:02.138) 0:00:16.903 ********** 2026-04-05 00:50:54.076234 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:54.076241 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:54.076248 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:54.076255 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:54.076262 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:54.076269 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:54.076276 | orchestrator | 2026-04-05 00:50:54.076284 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-05 00:50:54.076291 | orchestrator | Sunday 05 April 2026 00:50:30 +0000 (0:00:01.741) 0:00:18.644 ********** 2026-04-05 00:50:54.076320 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076567 | orchestrator | 2026-04-05 00:50:54.076575 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-05 00:50:54.076583 | orchestrator | Sunday 05 April 2026 00:50:33 +0000 (0:00:03.179) 0:00:21.824 ********** 2026-04-05 00:50:54.076590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076688 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076774 | orchestrator | 2026-04-05 00:50:54.076787 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-05 00:50:54.076800 | orchestrator | Sunday 05 April 2026 00:50:38 +0000 (0:00:04.418) 0:00:26.242 ********** 2026-04-05 00:50:54.076812 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:54.076823 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:54.076831 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:54.076838 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:54.076845 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:54.076852 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:54.076859 | orchestrator | 2026-04-05 00:50:54.076866 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-05 00:50:54.076874 | orchestrator | Sunday 05 April 2026 00:50:39 +0000 (0:00:01.332) 0:00:27.575 ********** 2026-04-05 00:50:54.076881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-04-05 00:50:54.076944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.076987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.077008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.077020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.077033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:54.077045 | orchestrator | 2026-04-05 00:50:54.077097 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-05 00:50:54.077140 | orchestrator | Sunday 05 April 2026 00:50:43 +0000 (0:00:03.892) 0:00:31.467 ********** 2026-04-05 00:50:54.077153 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:50:54.077161 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:54.077169 | orchestrator | } 2026-04-05 00:50:54.077176 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:50:54.077183 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:54.077191 | orchestrator | } 2026-04-05 00:50:54.077198 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:50:54.077205 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:54.077253 | orchestrator | } 2026-04-05 00:50:54.077260 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 00:50:54.077267 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:54.077274 | orchestrator | } 2026-04-05 00:50:54.077282 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 00:50:54.077289 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:54.077296 | orchestrator | } 2026-04-05 00:50:54.077303 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 00:50:54.077310 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:50:54.077323 | orchestrator | } 2026-04-05 00:50:54.077330 | orchestrator | 
2026-04-05 00:50:54.077337 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:50:54.077345 | orchestrator | Sunday 05 April 2026 00:50:44 +0000 (0:00:01.442) 0:00:32.910 ********** 2026-04-05 00:50:54.077353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:50:54.077369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:50:54.077377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:50:54.077385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:50:54.077392 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:54.077399 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:54.077407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:50:54.077424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:50:54.077432 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:54.077439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:50:54.077453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:50:54.077461 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:54.077468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:50:54.077476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:50:54.077488 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:54.077525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-05 00:50:54.077533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-05 00:50:54.077541 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:54.077548 | orchestrator | 2026-04-05 00:50:54.077555 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 
00:50:54.077562 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:02.583) 0:00:35.494 ********** 2026-04-05 00:50:54.077570 | orchestrator | 2026-04-05 00:50:54.077577 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:54.077584 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:00.129) 0:00:35.623 ********** 2026-04-05 00:50:54.077591 | orchestrator | 2026-04-05 00:50:54.077598 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:54.077605 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:00.242) 0:00:35.866 ********** 2026-04-05 00:50:54.077612 | orchestrator | 2026-04-05 00:50:54.077619 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:54.077626 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:00.128) 0:00:35.994 ********** 2026-04-05 00:50:54.077633 | orchestrator | 2026-04-05 00:50:54.077645 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:54.077653 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:00.134) 0:00:36.129 ********** 2026-04-05 00:50:54.077660 | orchestrator | 2026-04-05 00:50:54.077667 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:54.077674 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:00.123) 0:00:36.252 ********** 2026-04-05 00:50:54.077714 | orchestrator | 2026-04-05 00:50:54.077723 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-05 00:50:54.077730 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:00.142) 0:00:36.395 ********** 2026-04-05 00:50:54.077745 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_4rl56u1h/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_4rl56u1h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_4rl56u1h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_4rl56u1h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:54.077771 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ufuvfxkp/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ufuvfxkp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ufuvfxkp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ufuvfxkp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:54.077790 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_0eozy3bk/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_0eozy3bk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_0eozy3bk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_0eozy3bk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:54.077807 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_w4fhlugs/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_w4fhlugs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_w4fhlugs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_w4fhlugs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:54.077831 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_zbxewglm/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_zbxewglm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_zbxewglm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_zbxewglm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:54.077845 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_l02p2ma4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_l02p2ma4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_l02p2ma4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_l02p2ma4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:50:54.077858 | orchestrator | 2026-04-05 00:50:54.077865 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:50:54.077873 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-05 00:50:54.077881 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-05 00:50:54.077888 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-05 00:50:54.077895 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-05 00:50:54.077903 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 
2026-04-05 00:50:54.077910 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-05 00:50:54.077917 | orchestrator | 2026-04-05 00:50:54.077924 | orchestrator | 2026-04-05 00:50:54.077935 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:50:54.077943 | orchestrator | Sunday 05 April 2026 00:50:51 +0000 (0:00:03.424) 0:00:39.820 ********** 2026-04-05 00:50:54.077950 | orchestrator | =============================================================================== 2026-04-05 00:50:54.077957 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.42s 2026-04-05 00:50:54.077972 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.89s 2026-04-05 00:50:54.077980 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.61s 2026-04-05 00:50:54.077987 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 3.42s 2026-04-05 00:50:54.077994 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.26s 2026-04-05 00:50:54.078001 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.18s 2026-04-05 00:50:54.078008 | orchestrator | module-load : Load modules ---------------------------------------------- 2.69s 2026-04-05 00:50:54.078179 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.58s 2026-04-05 00:50:54.078195 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.41s 2026-04-05 00:50:54.078205 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.14s 2026-04-05 00:50:54.078213 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.84s 2026-04-05 00:50:54.078220 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 1.74s 2026-04-05 00:50:54.078228 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.44s 2026-04-05 00:50:54.078236 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.33s 2026-04-05 00:50:54.078243 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.90s 2026-04-05 00:50:54.078255 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:54.082093 | orchestrator | 2026-04-05 00:50:54 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:54.082188 | orchestrator | 2026-04-05 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:57.122381 | orchestrator | 2026-04-05 00:50:57 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:50:57.125147 | orchestrator | 2026-04-05 00:50:57 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:50:57.128992 | orchestrator | 2026-04-05 00:50:57 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:50:57.130603 | orchestrator | 2026-04-05 00:50:57 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:50:57.131838 | orchestrator | 2026-04-05 00:50:57 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:50:57.131862 | orchestrator | 2026-04-05 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:00.192383 | orchestrator | 2026-04-05 00:51:00 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:00.193279 | orchestrator | 2026-04-05 00:51:00 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:51:00.195649 | orchestrator | 2026-04-05 00:51:00 | INFO  | Task 
ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:00.196613 | orchestrator | 2026-04-05 00:51:00 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:00.198103 | orchestrator | 2026-04-05 00:51:00 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:00.198147 | orchestrator | 2026-04-05 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:03.244384 | orchestrator | 2026-04-05 00:51:03 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:03.244486 | orchestrator | 2026-04-05 00:51:03 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:51:03.244610 | orchestrator | 2026-04-05 00:51:03 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:03.244623 | orchestrator | 2026-04-05 00:51:03 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:03.244633 | orchestrator | 2026-04-05 00:51:03 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:03.244643 | orchestrator | 2026-04-05 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:06.288330 | orchestrator | 2026-04-05 00:51:06 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:06.289353 | orchestrator | 2026-04-05 00:51:06 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:51:06.291835 | orchestrator | 2026-04-05 00:51:06 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:06.292863 | orchestrator | 2026-04-05 00:51:06 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:06.295308 | orchestrator | 2026-04-05 00:51:06 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:06.295336 | orchestrator | 2026-04-05 00:51:06 | INFO  | Wait 1 
second(s) until the next check 2026-04-05 00:51:09.345419 | orchestrator | 2026-04-05 00:51:09 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:09.345544 | orchestrator | 2026-04-05 00:51:09 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:51:09.345554 | orchestrator | 2026-04-05 00:51:09 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:09.348671 | orchestrator | 2026-04-05 00:51:09 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:09.351026 | orchestrator | 2026-04-05 00:51:09 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:09.351231 | orchestrator | 2026-04-05 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:12.393853 | orchestrator | 2026-04-05 00:51:12 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:12.397305 | orchestrator | 2026-04-05 00:51:12 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:51:12.398584 | orchestrator | 2026-04-05 00:51:12 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:12.400077 | orchestrator | 2026-04-05 00:51:12 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:12.400535 | orchestrator | 2026-04-05 00:51:12 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:12.402324 | orchestrator | 2026-04-05 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:15.449954 | orchestrator | 2026-04-05 00:51:15 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:15.453215 | orchestrator | 2026-04-05 00:51:15 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:51:15.454767 | orchestrator | 2026-04-05 00:51:15 | INFO  | Task 
ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:15.456379 | orchestrator | 2026-04-05 00:51:15 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:15.460097 | orchestrator | 2026-04-05 00:51:15 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:15.460229 | orchestrator | 2026-04-05 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:18.506791 | orchestrator | 2026-04-05 00:51:18 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:18.509070 | orchestrator | 2026-04-05 00:51:18 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state STARTED 2026-04-05 00:51:18.509850 | orchestrator | 2026-04-05 00:51:18 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:18.511215 | orchestrator | 2026-04-05 00:51:18 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:18.512841 | orchestrator | 2026-04-05 00:51:18 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:18.512881 | orchestrator | 2026-04-05 00:51:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:21.559572 | orchestrator | 2026-04-05 00:51:21 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:21.562737 | orchestrator | 2026-04-05 00:51:21.562885 | orchestrator | 2026-04-05 00:51:21.562903 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:51:21.562925 | orchestrator | 2026-04-05 00:51:21.562966 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:51:21.562985 | orchestrator | Sunday 05 April 2026 00:50:57 +0000 (0:00:00.467) 0:00:00.467 ********** 2026-04-05 00:51:21.563020 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:51:21.563042 | orchestrator | ok: [testbed-node-1] 
2026-04-05 00:51:21.563063 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:51:21.563084 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:51:21.563103 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:51:21.563123 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:51:21.563142 | orchestrator | 2026-04-05 00:51:21.563160 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:51:21.563178 | orchestrator | Sunday 05 April 2026 00:50:58 +0000 (0:00:01.486) 0:00:01.953 ********** 2026-04-05 00:51:21.563197 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-05 00:51:21.563216 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-05 00:51:21.563233 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-05 00:51:21.563251 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-05 00:51:21.563269 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-05 00:51:21.563287 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-05 00:51:21.563306 | orchestrator | 2026-04-05 00:51:21.563325 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-05 00:51:21.563345 | orchestrator | 2026-04-05 00:51:21.563363 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-05 00:51:21.563382 | orchestrator | Sunday 05 April 2026 00:51:00 +0000 (0:00:01.695) 0:00:03.648 ********** 2026-04-05 00:51:21.563400 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:51:21.563421 | orchestrator | 2026-04-05 00:51:21.563439 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-05 00:51:21.563458 | orchestrator | Sunday 05 April 2026 00:51:03 +0000 
(0:00:02.854) 0:00:06.503 ********** 2026-04-05 00:51:21.563510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563629 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563688 | orchestrator | 2026-04-05 00:51:21.563708 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-05 00:51:21.563720 | orchestrator | Sunday 05 April 2026 00:51:07 +0000 (0:00:03.938) 0:00:10.441 ********** 2026-04-05 00:51:21.563731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563865 | orchestrator | 2026-04-05 00:51:21.563876 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-05 00:51:21.563887 | orchestrator | Sunday 05 April 2026 00:51:09 +0000 (0:00:02.157) 0:00:12.599 ********** 2026-04-05 00:51:21.563898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.563999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564051 | orchestrator | 2026-04-05 00:51:21.564071 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-05 00:51:21.564092 | orchestrator | Sunday 05 April 2026 00:51:11 +0000 (0:00:02.081) 0:00:14.681 ********** 2026-04-05 00:51:21.564111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-05 00:51:21.564227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564246 | orchestrator | 2026-04-05 00:51:21.564263 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-05 00:51:21.564281 | orchestrator | Sunday 05 April 2026 00:51:13 +0000 (0:00:01.794) 0:00:16.475 ********** 2026-04-05 00:51:21.564299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564401 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:51:21.564420 | orchestrator | 2026-04-05 00:51:21.564438 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-05 00:51:21.564456 | orchestrator | Sunday 05 April 2026 00:51:15 +0000 (0:00:02.296) 0:00:18.771 ********** 2026-04-05 00:51:21.564474 | orchestrator | changed: 
[testbed-node-0] => { 2026-04-05 00:51:21.564527 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:21.564548 | orchestrator | } 2026-04-05 00:51:21.564566 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:51:21.564584 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:21.564603 | orchestrator | } 2026-04-05 00:51:21.564619 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:51:21.564636 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:21.564654 | orchestrator | } 2026-04-05 00:51:21.564671 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 00:51:21.564689 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:21.564707 | orchestrator | } 2026-04-05 00:51:21.564724 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 00:51:21.564742 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:21.564760 | orchestrator | } 2026-04-05 00:51:21.564778 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 00:51:21.564796 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:21.564815 | orchestrator | } 2026-04-05 00:51:21.564833 | orchestrator | 2026-04-05 00:51:21.564851 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:51:21.564885 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:00.840) 0:00:19.611 ********** 2026-04-05 00:51:21.564915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:21.564950 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:21.564969 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:21.564987 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:21.565006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:21.565025 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:21.565043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:21.565062 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:21.565079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:21.565097 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:21.565114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:51:21.565133 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:21.565150 | orchestrator | 2026-04-05 00:51:21.565168 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-05 00:51:21.565187 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:02.274) 0:00:21.888 ********** 2026-04-05 00:51:21.565204 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-05 00:51:21.565222 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-05 00:51:21.565240 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-05 00:51:21.565258 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-05 00:51:21.565277 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-05 00:51:21.565294 | orchestrator | fatal: [testbed-node-5]: FAILED! 
=> {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-05 00:51:21.565332 | orchestrator | 2026-04-05 00:51:21.565351 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:51:21.565385 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:51:21.565416 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:51:21.565436 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:51:21.565454 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:51:21.565473 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:51:21.565570 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-05 00:51:21.565589 | orchestrator | 2026-04-05 00:51:21.565601 | orchestrator | 2026-04-05 00:51:21.565613 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:51:21.565624 | orchestrator | Sunday 05 April 2026 00:51:20 +0000 (0:00:01.547) 0:00:23.436 ********** 2026-04-05 00:51:21.565636 | orchestrator | =============================================================================== 2026-04-05 00:51:21.565647 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 3.94s 2026-04-05 00:51:21.565658 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.85s 2026-04-05 00:51:21.565669 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.30s 2026-04-05 00:51:21.565681 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.27s 
2026-04-05 00:51:21.565692 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.16s 2026-04-05 00:51:21.565703 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.08s 2026-04-05 00:51:21.565714 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.79s 2026-04-05 00:51:21.565725 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.70s 2026-04-05 00:51:21.565736 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 1.55s 2026-04-05 00:51:21.565747 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.49s 2026-04-05 00:51:21.565759 | orchestrator | service-check-containers : ovn_controller | Notify handlers to restart containers --- 0.84s 2026-04-05 00:51:21.565774 | orchestrator | 2026-04-05 00:51:21 | INFO  | Task f3e14900-9265-4dc6-8fa5-080bed750a77 is in state SUCCESS 2026-04-05 00:51:21.565794 | orchestrator | 2026-04-05 00:51:21 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:21.566007 | orchestrator | 2026-04-05 00:51:21 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:21.566121 | orchestrator | 2026-04-05 00:51:21 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:21.566142 | orchestrator | 2026-04-05 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:24.591819 | orchestrator | 2026-04-05 00:51:24 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:24.593177 | orchestrator | 2026-04-05 00:51:24 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state STARTED 2026-04-05 00:51:24.594501 | orchestrator | 2026-04-05 00:51:24 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:24.595647 | orchestrator | 2026-04-05 
00:51:24 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:24.595671 | orchestrator | 2026-04-05 00:51:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:27.636848 | orchestrator | 2026-04-05 00:51:27 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:27.638173 | orchestrator | 2026-04-05 00:51:27 | INFO  | Task ef10bf1f-2745-40a9-8109-6e118b2f5772 is in state SUCCESS 2026-04-05 00:51:27.640040 | orchestrator | 2026-04-05 00:51:27.640106 | orchestrator | 2026-04-05 00:51:27.640119 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-05 00:51:27.640129 | orchestrator | 2026-04-05 00:51:27.640138 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-05 00:51:27.640148 | orchestrator | Sunday 05 April 2026 00:50:40 +0000 (0:00:00.551) 0:00:00.551 ********** 2026-04-05 00:51:27.640157 | orchestrator | ok: [localhost] => { 2026-04-05 00:51:27.640168 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-05 00:51:27.640178 | orchestrator | } 2026-04-05 00:51:27.640187 | orchestrator | 2026-04-05 00:51:27.640196 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-05 00:51:27.640205 | orchestrator | Sunday 05 April 2026 00:50:40 +0000 (0:00:00.118) 0:00:00.670 ********** 2026-04-05 00:51:27.640215 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-05 00:51:27.640225 | orchestrator | ...ignoring 2026-04-05 00:51:27.640234 | orchestrator | 2026-04-05 00:51:27.640258 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-05 00:51:27.640267 | orchestrator | Sunday 05 April 2026 00:50:45 +0000 (0:00:04.730) 0:00:05.401 ********** 2026-04-05 00:51:27.640276 | orchestrator | skipping: [localhost] 2026-04-05 00:51:27.640285 | orchestrator | 2026-04-05 00:51:27.640294 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-05 00:51:27.640302 | orchestrator | Sunday 05 April 2026 00:50:45 +0000 (0:00:00.148) 0:00:05.549 ********** 2026-04-05 00:51:27.640311 | orchestrator | ok: [localhost] 2026-04-05 00:51:27.640320 | orchestrator | 2026-04-05 00:51:27.640329 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:51:27.640338 | orchestrator | 2026-04-05 00:51:27.640347 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:51:27.640356 | orchestrator | Sunday 05 April 2026 00:50:46 +0000 (0:00:00.475) 0:00:06.024 ********** 2026-04-05 00:51:27.640365 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:51:27.640379 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:51:27.640393 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:51:27.640407 | orchestrator | 2026-04-05 00:51:27.640421 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:51:27.640436 | orchestrator | Sunday 05 April 2026 00:50:46 +0000 (0:00:00.547) 0:00:06.571 ********** 2026-04-05 00:51:27.640451 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-05 00:51:27.640466 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-04-05 00:51:27.640554 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-05 00:51:27.640565 | orchestrator | 2026-04-05 00:51:27.640574 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-05 00:51:27.640583 | orchestrator | 2026-04-05 00:51:27.640592 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-05 00:51:27.640601 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:00.973) 0:00:07.545 ********** 2026-04-05 00:51:27.640610 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:51:27.640644 | orchestrator | 2026-04-05 00:51:27.640659 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-05 00:51:27.640673 | orchestrator | Sunday 05 April 2026 00:50:50 +0000 (0:00:02.372) 0:00:09.918 ********** 2026-04-05 00:51:27.640687 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:51:27.640701 | orchestrator | 2026-04-05 00:51:27.640715 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-05 00:51:27.640728 | orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:01.813) 0:00:11.731 ********** 2026-04-05 00:51:27.640743 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:27.640759 | orchestrator | 2026-04-05 00:51:27.640775 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-05 00:51:27.640784 | orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:00.393) 0:00:12.125 ********** 2026-04-05 00:51:27.640793 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:27.640801 | orchestrator | 2026-04-05 00:51:27.640837 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-05 00:51:27.640847 | 
orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:00.329) 0:00:12.455 ********** 2026-04-05 00:51:27.640856 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:27.640868 | orchestrator | 2026-04-05 00:51:27.640883 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-05 00:51:27.640899 | orchestrator | Sunday 05 April 2026 00:50:53 +0000 (0:00:00.361) 0:00:12.816 ********** 2026-04-05 00:51:27.640913 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:27.640946 | orchestrator | 2026-04-05 00:51:27.640961 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-05 00:51:27.640974 | orchestrator | Sunday 05 April 2026 00:50:53 +0000 (0:00:00.331) 0:00:13.147 ********** 2026-04-05 00:51:27.640983 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:51:27.640992 | orchestrator | 2026-04-05 00:51:27.641001 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-05 00:51:27.641009 | orchestrator | Sunday 05 April 2026 00:50:54 +0000 (0:00:00.949) 0:00:14.097 ********** 2026-04-05 00:51:27.641019 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:51:27.641034 | orchestrator | 2026-04-05 00:51:27.641049 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-05 00:51:27.641064 | orchestrator | Sunday 05 April 2026 00:50:55 +0000 (0:00:00.943) 0:00:15.041 ********** 2026-04-05 00:51:27.641080 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:27.641094 | orchestrator | 2026-04-05 00:51:27.641107 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-05 00:51:27.641116 | orchestrator | Sunday 05 April 2026 00:50:56 +0000 (0:00:01.473) 0:00:16.514 ********** 2026-04-05 00:51:27.641125 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 00:51:27.641134 | orchestrator | 2026-04-05 00:51:27.641158 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-05 00:51:27.641167 | orchestrator | Sunday 05 April 2026 00:50:57 +0000 (0:00:00.367) 0:00:16.881 ********** 2026-04-05 00:51:27.641190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.641240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.641254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.641279 | orchestrator | 2026-04-05 00:51:27.641289 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-05 00:51:27.641298 | orchestrator | Sunday 05 April 2026 00:50:58 +0000 (0:00:01.771) 0:00:18.652 ********** 2026-04-05 00:51:27.641317 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.641353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.641372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.641382 | orchestrator | 2026-04-05 00:51:27.641391 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-05 00:51:27.641400 | orchestrator | Sunday 05 April 2026 00:51:00 +0000 (0:00:01.952) 0:00:20.605 ********** 2026-04-05 00:51:27.641409 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:27.641418 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:27.641426 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 00:51:27.641435 | 
orchestrator | 2026-04-05 00:51:27.641444 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-05 00:51:27.641453 | orchestrator | Sunday 05 April 2026 00:51:03 +0000 (0:00:02.326) 0:00:22.931 ********** 2026-04-05 00:51:27.641462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 00:51:27.641471 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 00:51:27.641525 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 00:51:27.641542 | orchestrator | 2026-04-05 00:51:27.641557 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-05 00:51:27.641566 | orchestrator | Sunday 05 April 2026 00:51:08 +0000 (0:00:05.043) 0:00:27.975 ********** 2026-04-05 00:51:27.641575 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 00:51:27.641584 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 00:51:27.641593 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 00:51:27.641602 | orchestrator | 2026-04-05 00:51:27.641618 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-05 00:51:27.641627 | orchestrator | Sunday 05 April 2026 00:51:10 +0000 (0:00:01.932) 0:00:29.908 ********** 2026-04-05 00:51:27.641636 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 00:51:27.641654 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 00:51:27.641663 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 00:51:27.641672 | orchestrator | 2026-04-05 00:51:27.641680 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-05 00:51:27.641690 | orchestrator | Sunday 05 April 2026 00:51:11 +0000 (0:00:01.756) 0:00:31.664 ********** 2026-04-05 00:51:27.641699 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 00:51:27.641708 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 00:51:27.641797 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 00:51:27.641810 | orchestrator | 2026-04-05 00:51:27.641819 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-05 00:51:27.641836 | orchestrator | Sunday 05 April 2026 00:51:13 +0000 (0:00:01.990) 0:00:33.654 ********** 2026-04-05 00:51:27.641846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 00:51:27.641854 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 00:51:27.641863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 00:51:27.641872 | orchestrator | 2026-04-05 00:51:27.641881 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-05 00:51:27.641890 | orchestrator | Sunday 05 April 2026 00:51:15 +0000 (0:00:01.662) 0:00:35.317 ********** 2026-04-05 00:51:27.641899 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:51:27.641908 | orchestrator | 2026-04-05 00:51:27.641933 | orchestrator | TASK [service-cert-copy : rabbitmq | 
Copying over extra CA certificates] ******* 2026-04-05 00:51:27.641943 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:00.933) 0:00:36.251 ********** 2026-04-05 00:51:27.641953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.641963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.642001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.642059 | orchestrator | 2026-04-05 00:51:27.642072 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-05 00:51:27.642081 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:01.549) 0:00:37.801 ********** 2026-04-05 00:51:27.642091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642101 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:27.642111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642120 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:27.642139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642155 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:27.642164 | orchestrator | 2026-04-05 00:51:27.642188 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-05 00:51:27.642198 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:00.661) 0:00:38.462 ********** 2026-04-05 00:51:27.642212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642233 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:27.642242 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 00:51:27.642252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642267 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:27.642276 | orchestrator | 2026-04-05 00:51:27.642285 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-05 00:51:27.642294 | orchestrator | Sunday 05 April 2026 00:51:19 +0000 (0:00:01.213) 0:00:39.675 ********** 2026-04-05 00:51:27.642315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.642326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.642337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:51:27.642352 | orchestrator | 2026-04-05 00:51:27.642361 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-05 00:51:27.642386 | orchestrator | Sunday 05 April 2026 00:51:21 +0000 (0:00:01.295) 0:00:40.971 ********** 2026-04-05 00:51:27.642396 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:51:27.642405 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:27.642413 | orchestrator | } 2026-04-05 00:51:27.642422 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:51:27.642431 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:27.642439 | orchestrator | } 2026-04-05 00:51:27.642448 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:51:27.642456 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:51:27.642465 | orchestrator | } 2026-04-05 00:51:27.642497 | orchestrator | 2026-04-05 00:51:27.642508 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:51:27.642517 | orchestrator | Sunday 05 April 2026 00:51:21 +0000 (0:00:00.359) 0:00:41.330 ********** 2026-04-05 00:51:27.642534 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642654 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:27.642663 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:27.642673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:51:27.642690 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:27.642699 | orchestrator | 2026-04-05 00:51:27.642708 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-05 00:51:27.642717 | orchestrator | Sunday 05 April 2026 00:51:22 +0000 (0:00:00.914) 0:00:42.245 ********** 2026-04-05 00:51:27.642725 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:27.642734 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:27.642743 | orchestrator | changed: [testbed-node-2] 
2026-04-05 00:51:27.642751 | orchestrator | 2026-04-05 00:51:27.642760 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-05 00:51:27.642769 | orchestrator | Sunday 05 April 2026 00:51:23 +0000 (0:00:00.897) 0:00:43.142 ********** 2026-04-05 00:51:27.642860 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_od4jsdoh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_od4jsdoh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_od4jsdoh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:51:27.642874 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_j19l610a/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_j19l610a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_j19l610a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File 
\"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:51:27.642906 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_0o08i97d/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_0o08i97d/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_0o08i97d/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:51:27.642917 | orchestrator | 2026-04-05 00:51:27.642926 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:51:27.642941 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-05 00:51:27.642952 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0 2026-04-05 00:51:27.642962 | orchestrator | testbed-node-1 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-05 00:51:27.642971 | orchestrator | testbed-node-2 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-05 00:51:27.642980 | orchestrator | 2026-04-05 00:51:27.642989 | orchestrator | 2026-04-05 00:51:27.642998 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:51:27.643007 | orchestrator | Sunday 05 April 2026 00:51:24 +0000 (0:00:01.131) 0:00:44.273 ********** 2026-04-05 00:51:27.643015 | orchestrator | =============================================================================== 2026-04-05 00:51:27.643024 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 
5.04s 2026-04-05 00:51:27.643033 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.73s 2026-04-05 00:51:27.643042 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.37s 2026-04-05 00:51:27.643051 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.33s 2026-04-05 00:51:27.643059 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.99s 2026-04-05 00:51:27.643069 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.95s 2026-04-05 00:51:27.643077 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.93s 2026-04-05 00:51:27.643086 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.81s 2026-04-05 00:51:27.643094 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.77s 2026-04-05 00:51:27.643104 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.76s 2026-04-05 00:51:27.643112 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.66s 2026-04-05 00:51:27.643122 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.55s 2026-04-05 00:51:27.643130 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.47s 2026-04-05 00:51:27.643139 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.30s 2026-04-05 00:51:27.643147 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 1.21s 2026-04-05 00:51:27.643161 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 1.13s 2026-04-05 00:51:27.643171 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 
2026-04-05 00:51:27.643179 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.95s 2026-04-05 00:51:27.643188 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.94s 2026-04-05 00:51:27.643197 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.93s 2026-04-05 00:51:27.643206 | orchestrator | 2026-04-05 00:51:27 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:27.643215 | orchestrator | 2026-04-05 00:51:27 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:27.643224 | orchestrator | 2026-04-05 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:30.673023 | orchestrator | 2026-04-05 00:51:30 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:30.675694 | orchestrator | 2026-04-05 00:51:30 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:30.677166 | orchestrator | 2026-04-05 00:51:30 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:30.677216 | orchestrator | 2026-04-05 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:33.717539 | orchestrator | 2026-04-05 00:51:33 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:33.718242 | orchestrator | 2026-04-05 00:51:33 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:33.718567 | orchestrator | 2026-04-05 00:51:33 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:33.718781 | orchestrator | 2026-04-05 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:36.760933 | orchestrator | 2026-04-05 00:51:36 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:36.765008 | orchestrator | 2026-04-05 00:51:36 | 
INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:36.765060 | orchestrator | 2026-04-05 00:51:36 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:36.765069 | orchestrator | 2026-04-05 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:39.806574 | orchestrator | 2026-04-05 00:51:39 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:39.806683 | orchestrator | 2026-04-05 00:51:39 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:39.806699 | orchestrator | 2026-04-05 00:51:39 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:39.806711 | orchestrator | 2026-04-05 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:42.854950 | orchestrator | 2026-04-05 00:51:42 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:42.855533 | orchestrator | 2026-04-05 00:51:42 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:42.857121 | orchestrator | 2026-04-05 00:51:42 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:42.857168 | orchestrator | 2026-04-05 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:45.895672 | orchestrator | 2026-04-05 00:51:45 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:45.897492 | orchestrator | 2026-04-05 00:51:45 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:45.898752 | orchestrator | 2026-04-05 00:51:45 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:45.898786 | orchestrator | 2026-04-05 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:48.944511 | orchestrator | 2026-04-05 00:51:48 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in 
state STARTED 2026-04-05 00:51:48.947437 | orchestrator | 2026-04-05 00:51:48 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:48.948905 | orchestrator | 2026-04-05 00:51:48 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:48.949288 | orchestrator | 2026-04-05 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:51.988337 | orchestrator | 2026-04-05 00:51:51 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:51.991431 | orchestrator | 2026-04-05 00:51:51 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:51.992676 | orchestrator | 2026-04-05 00:51:51 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:51.992754 | orchestrator | 2026-04-05 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:55.060207 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:55.060307 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:55.060341 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:55.060365 | orchestrator | 2026-04-05 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:51:58.130453 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:51:58.131248 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:51:58.132314 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:51:58.132685 | orchestrator | 2026-04-05 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:01.181939 | orchestrator 
| 2026-04-05 00:52:01 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:01.182087 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:01.183526 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:01.183565 | orchestrator | 2026-04-05 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:04.230780 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:04.231418 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:04.234638 | orchestrator | 2026-04-05 00:52:04 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:04.234697 | orchestrator | 2026-04-05 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:07.284405 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:07.284908 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:07.285902 | orchestrator | 2026-04-05 00:52:07 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:07.285947 | orchestrator | 2026-04-05 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:10.323291 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:10.326369 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:10.332095 | orchestrator | 2026-04-05 00:52:10 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:10.332194 | orchestrator | 2026-04-05 00:52:10 | INFO  | 
Wait 1 second(s) until the next check 2026-04-05 00:52:13.372240 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:13.372582 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:13.373589 | orchestrator | 2026-04-05 00:52:13 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:13.373659 | orchestrator | 2026-04-05 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:16.425513 | orchestrator | 2026-04-05 00:52:16 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:16.426320 | orchestrator | 2026-04-05 00:52:16 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:16.427709 | orchestrator | 2026-04-05 00:52:16 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:16.427761 | orchestrator | 2026-04-05 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:19.525040 | orchestrator | 2026-04-05 00:52:19 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:19.525118 | orchestrator | 2026-04-05 00:52:19 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:19.525125 | orchestrator | 2026-04-05 00:52:19 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:19.525130 | orchestrator | 2026-04-05 00:52:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:22.707494 | orchestrator | 2026-04-05 00:52:22 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:22.709391 | orchestrator | 2026-04-05 00:52:22 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:22.710781 | orchestrator | 2026-04-05 00:52:22 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state 
STARTED 2026-04-05 00:52:22.710832 | orchestrator | 2026-04-05 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:25.750137 | orchestrator | 2026-04-05 00:52:25 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:25.751077 | orchestrator | 2026-04-05 00:52:25 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:25.752743 | orchestrator | 2026-04-05 00:52:25 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:25.752789 | orchestrator | 2026-04-05 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:28.797065 | orchestrator | 2026-04-05 00:52:28 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:28.797138 | orchestrator | 2026-04-05 00:52:28 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:28.798396 | orchestrator | 2026-04-05 00:52:28 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:28.798564 | orchestrator | 2026-04-05 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:31.842135 | orchestrator | 2026-04-05 00:52:31 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:31.842322 | orchestrator | 2026-04-05 00:52:31 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:31.845482 | orchestrator | 2026-04-05 00:52:31 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:31.845633 | orchestrator | 2026-04-05 00:52:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:34.884173 | orchestrator | 2026-04-05 00:52:34 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:34.886778 | orchestrator | 2026-04-05 00:52:34 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:34.889786 | orchestrator | 
2026-04-05 00:52:34 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:34.889849 | orchestrator | 2026-04-05 00:52:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:37.942831 | orchestrator | 2026-04-05 00:52:37 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:37.942904 | orchestrator | 2026-04-05 00:52:37 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:37.944359 | orchestrator | 2026-04-05 00:52:37 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:37.944393 | orchestrator | 2026-04-05 00:52:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:40.990218 | orchestrator | 2026-04-05 00:52:40 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:40.993764 | orchestrator | 2026-04-05 00:52:40 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:40.994833 | orchestrator | 2026-04-05 00:52:40 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:40.995035 | orchestrator | 2026-04-05 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:44.030724 | orchestrator | 2026-04-05 00:52:44 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:44.032144 | orchestrator | 2026-04-05 00:52:44 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:44.034229 | orchestrator | 2026-04-05 00:52:44 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:44.034269 | orchestrator | 2026-04-05 00:52:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:47.076030 | orchestrator | 2026-04-05 00:52:47 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:47.080616 | orchestrator | 2026-04-05 00:52:47 | INFO  | Task 
a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:47.080707 | orchestrator | 2026-04-05 00:52:47 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:47.080722 | orchestrator | 2026-04-05 00:52:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:50.117475 | orchestrator | 2026-04-05 00:52:50 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:50.118125 | orchestrator | 2026-04-05 00:52:50 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:50.119160 | orchestrator | 2026-04-05 00:52:50 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:50.119179 | orchestrator | 2026-04-05 00:52:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:53.162194 | orchestrator | 2026-04-05 00:52:53 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:53.162287 | orchestrator | 2026-04-05 00:52:53 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:53.162297 | orchestrator | 2026-04-05 00:52:53 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:53.162305 | orchestrator | 2026-04-05 00:52:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:56.214299 | orchestrator | 2026-04-05 00:52:56 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:52:56.217772 | orchestrator | 2026-04-05 00:52:56 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:56.220911 | orchestrator | 2026-04-05 00:52:56 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:56.221005 | orchestrator | 2026-04-05 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:59.268807 | orchestrator | 2026-04-05 00:52:59 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state 
STARTED 2026-04-05 00:52:59.275442 | orchestrator | 2026-04-05 00:52:59 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:52:59.276666 | orchestrator | 2026-04-05 00:52:59 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:52:59.277598 | orchestrator | 2026-04-05 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:02.365584 | orchestrator | 2026-04-05 00:53:02 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:02.370334 | orchestrator | 2026-04-05 00:53:02 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:02.373516 | orchestrator | 2026-04-05 00:53:02 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:02.373643 | orchestrator | 2026-04-05 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:05.432994 | orchestrator | 2026-04-05 00:53:05 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:05.433594 | orchestrator | 2026-04-05 00:53:05 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:05.435010 | orchestrator | 2026-04-05 00:53:05 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:05.435061 | orchestrator | 2026-04-05 00:53:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:08.481925 | orchestrator | 2026-04-05 00:53:08 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:08.483205 | orchestrator | 2026-04-05 00:53:08 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:08.485234 | orchestrator | 2026-04-05 00:53:08 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:08.485372 | orchestrator | 2026-04-05 00:53:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:11.532148 | orchestrator | 
2026-04-05 00:53:11 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:11.532249 | orchestrator | 2026-04-05 00:53:11 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:11.532861 | orchestrator | 2026-04-05 00:53:11 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:11.532932 | orchestrator | 2026-04-05 00:53:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:14.564008 | orchestrator | 2026-04-05 00:53:14 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:14.564225 | orchestrator | 2026-04-05 00:53:14 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:14.565237 | orchestrator | 2026-04-05 00:53:14 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:14.565284 | orchestrator | 2026-04-05 00:53:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:17.614298 | orchestrator | 2026-04-05 00:53:17 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:17.615736 | orchestrator | 2026-04-05 00:53:17 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:17.617662 | orchestrator | 2026-04-05 00:53:17 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:17.617718 | orchestrator | 2026-04-05 00:53:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:20.662674 | orchestrator | 2026-04-05 00:53:20 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:20.665235 | orchestrator | 2026-04-05 00:53:20 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:20.666176 | orchestrator | 2026-04-05 00:53:20 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:20.666246 | orchestrator | 2026-04-05 00:53:20 | INFO  | 
Wait 1 second(s) until the next check 2026-04-05 00:53:23.801027 | orchestrator | 2026-04-05 00:53:23 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:23.801965 | orchestrator | 2026-04-05 00:53:23 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:23.803123 | orchestrator | 2026-04-05 00:53:23 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:23.803373 | orchestrator | 2026-04-05 00:53:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:27.080326 | orchestrator | 2026-04-05 00:53:26 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:27.080511 | orchestrator | 2026-04-05 00:53:26 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:27.080540 | orchestrator | 2026-04-05 00:53:26 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:27.080562 | orchestrator | 2026-04-05 00:53:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:30.165055 | orchestrator | 2026-04-05 00:53:30 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:30.167791 | orchestrator | 2026-04-05 00:53:30 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:30.170219 | orchestrator | 2026-04-05 00:53:30 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:30.170296 | orchestrator | 2026-04-05 00:53:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:33.222939 | orchestrator | 2026-04-05 00:53:33 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:33.223913 | orchestrator | 2026-04-05 00:53:33 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:33.224728 | orchestrator | 2026-04-05 00:53:33 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state 
STARTED 2026-04-05 00:53:33.224987 | orchestrator | 2026-04-05 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:36.269464 | orchestrator | 2026-04-05 00:53:36 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:36.273713 | orchestrator | 2026-04-05 00:53:36 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state STARTED 2026-04-05 00:53:36.275642 | orchestrator | 2026-04-05 00:53:36 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:36.275846 | orchestrator | 2026-04-05 00:53:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:39.327529 | orchestrator | 2026-04-05 00:53:39 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:39.331520 | orchestrator | 2026-04-05 00:53:39 | INFO  | Task a0d0cd36-cb56-4c88-8b6b-fb359ac5d6e1 is in state SUCCESS 2026-04-05 00:53:39.334530 | orchestrator | 2026-04-05 00:53:39.334607 | orchestrator | 2026-04-05 00:53:39.334616 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-05 00:53:39.334624 | orchestrator | 2026-04-05 00:53:39.334631 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-05 00:53:39.334638 | orchestrator | Sunday 05 April 2026 00:48:31 +0000 (0:00:00.399) 0:00:00.399 ********** 2026-04-05 00:53:39.334644 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:53:39.334651 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:53:39.334673 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:53:39.334679 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:39.334685 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:39.334691 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:39.334696 | orchestrator | 2026-04-05 00:53:39.334702 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-05 
00:53:39.334708 | orchestrator | Sunday 05 April 2026 00:48:31 +0000 (0:00:00.727) 0:00:01.127 ********** 2026-04-05 00:53:39.334714 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.334721 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.334726 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.334732 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.334738 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.334743 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.334749 | orchestrator | 2026-04-05 00:53:39.334755 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-05 00:53:39.334761 | orchestrator | Sunday 05 April 2026 00:48:32 +0000 (0:00:00.781) 0:00:01.908 ********** 2026-04-05 00:53:39.334767 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.334773 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.334778 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.334784 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.334790 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.334800 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.334806 | orchestrator | 2026-04-05 00:53:39.334812 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-05 00:53:39.334818 | orchestrator | Sunday 05 April 2026 00:48:33 +0000 (0:00:00.654) 0:00:02.563 ********** 2026-04-05 00:53:39.334823 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:39.334829 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:39.334835 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:39.334840 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:39.334846 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:39.334852 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:39.334857 | orchestrator | 
2026-04-05 00:53:39.334863 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-05 00:53:39.334869 | orchestrator | Sunday 05 April 2026 00:48:35 +0000 (0:00:02.590) 0:00:05.154 ********** 2026-04-05 00:53:39.334874 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:39.334880 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:39.334885 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:39.334891 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:39.334897 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:39.334903 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:39.334909 | orchestrator | 2026-04-05 00:53:39.334914 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-05 00:53:39.334920 | orchestrator | Sunday 05 April 2026 00:48:37 +0000 (0:00:01.160) 0:00:06.314 ********** 2026-04-05 00:53:39.334926 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:39.334931 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:39.334937 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:39.334943 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:39.334948 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:39.334954 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:39.334960 | orchestrator | 2026-04-05 00:53:39.334965 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-05 00:53:39.334971 | orchestrator | Sunday 05 April 2026 00:48:38 +0000 (0:00:01.109) 0:00:07.424 ********** 2026-04-05 00:53:39.334977 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.334982 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.334988 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.334994 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.334999 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 00:53:39.335009 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335015 | orchestrator | 2026-04-05 00:53:39.335021 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-05 00:53:39.335027 | orchestrator | Sunday 05 April 2026 00:48:39 +0000 (0:00:01.165) 0:00:08.589 ********** 2026-04-05 00:53:39.335032 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.335038 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.335043 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.335049 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.335055 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.335060 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335066 | orchestrator | 2026-04-05 00:53:39.335072 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-05 00:53:39.335077 | orchestrator | Sunday 05 April 2026 00:48:40 +0000 (0:00:00.990) 0:00:09.580 ********** 2026-04-05 00:53:39.335084 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:53:39.335089 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:53:39.335095 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.335101 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:53:39.335107 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:53:39.335115 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.335125 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:53:39.335134 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:53:39.335143 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 00:53:39.335154 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:53:39.335176 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:53:39.335186 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.335196 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:53:39.335205 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:53:39.335214 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.335223 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:53:39.335232 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:53:39.335241 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335251 | orchestrator | 2026-04-05 00:53:39.335260 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-05 00:53:39.335270 | orchestrator | Sunday 05 April 2026 00:48:41 +0000 (0:00:01.390) 0:00:10.971 ********** 2026-04-05 00:53:39.335280 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.335291 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.335301 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.335312 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.335320 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.335327 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335334 | orchestrator | 2026-04-05 00:53:39.335341 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-05 00:53:39.335349 | orchestrator | Sunday 05 April 2026 00:48:43 +0000 (0:00:01.869) 0:00:12.840 ********** 2026-04-05 00:53:39.335356 | 
orchestrator | ok: [testbed-node-3] 2026-04-05 00:53:39.335363 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:53:39.335370 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:53:39.335376 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:39.335407 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:39.335414 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:39.335420 | orchestrator | 2026-04-05 00:53:39.335426 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-05 00:53:39.335438 | orchestrator | Sunday 05 April 2026 00:48:44 +0000 (0:00:01.202) 0:00:14.043 ********** 2026-04-05 00:53:39.335443 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:39.335449 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:39.335455 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:39.335461 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:39.335467 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:39.335472 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:39.335478 | orchestrator | 2026-04-05 00:53:39.335484 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-05 00:53:39.335490 | orchestrator | Sunday 05 April 2026 00:48:53 +0000 (0:00:08.466) 0:00:22.510 ********** 2026-04-05 00:53:39.335496 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.335501 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.335507 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.335513 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.335519 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.335525 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335530 | orchestrator | 2026-04-05 00:53:39.335536 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-05 00:53:39.335542 | orchestrator | Sunday 
05 April 2026 00:48:55 +0000 (0:00:01.874) 0:00:24.384 ********** 2026-04-05 00:53:39.335548 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.335553 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.335559 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.335565 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.335571 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335576 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.335582 | orchestrator | 2026-04-05 00:53:39.335588 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-05 00:53:39.335596 | orchestrator | Sunday 05 April 2026 00:48:57 +0000 (0:00:02.647) 0:00:27.031 ********** 2026-04-05 00:53:39.335602 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.335607 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.335613 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.335619 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.335628 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.335637 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335645 | orchestrator | 2026-04-05 00:53:39.335653 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-05 00:53:39.335662 | orchestrator | Sunday 05 April 2026 00:48:59 +0000 (0:00:02.055) 0:00:29.087 ********** 2026-04-05 00:53:39.335671 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-05 00:53:39.335679 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-05 00:53:39.335690 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.335700 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-05 00:53:39.335708 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-05 
00:53:39.335717 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.335726 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-05 00:53:39.335735 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-05 00:53:39.335743 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.335753 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-05 00:53:39.335762 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-05 00:53:39.335771 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.335780 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-05 00:53:39.335791 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-05 00:53:39.335800 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.335808 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-05 00:53:39.335825 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-05 00:53:39.335837 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335847 | orchestrator | 2026-04-05 00:53:39.335858 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-05 00:53:39.335877 | orchestrator | Sunday 05 April 2026 00:49:02 +0000 (0:00:02.967) 0:00:32.055 ********** 2026-04-05 00:53:39.335888 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.335897 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.335908 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.335919 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.335929 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.335938 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.335944 | orchestrator | 2026-04-05 00:53:39.335951 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 
2026-04-05 00:53:39.335957 | orchestrator | Sunday 05 April 2026 00:49:04 +0000 (0:00:01.747) 0:00:33.803 **********
2026-04-05 00:53:39.335964 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:53:39.335970 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:53:39.335976 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:53:39.335982 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:39.335989 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.335995 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.336001 | orchestrator |
2026-04-05 00:53:39.336009 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-05 00:53:39.336020 | orchestrator |
2026-04-05 00:53:39.336030 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-05 00:53:39.336040 | orchestrator | Sunday 05 April 2026 00:49:06 +0000 (0:00:02.202) 0:00:36.005 **********
2026-04-05 00:53:39.336050 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.336061 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.336072 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.336082 | orchestrator |
2026-04-05 00:53:39.336094 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-05 00:53:39.336104 | orchestrator | Sunday 05 April 2026 00:49:08 +0000 (0:00:01.517) 0:00:37.523 **********
2026-04-05 00:53:39.336121 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.336132 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.336139 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.336147 | orchestrator |
2026-04-05 00:53:39.336158 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-05 00:53:39.336168 | orchestrator | Sunday 05 April 2026 00:49:09 +0000 (0:00:01.485) 0:00:39.008 **********
2026-04-05 00:53:39.336179 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.336188 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.336199 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.336210 | orchestrator |
2026-04-05 00:53:39.336220 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-05 00:53:39.336231 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:01.096) 0:00:40.105 **********
2026-04-05 00:53:39.336242 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.336252 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.336258 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.336264 | orchestrator |
2026-04-05 00:53:39.336271 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-05 00:53:39.336277 | orchestrator | Sunday 05 April 2026 00:49:12 +0000 (0:00:01.403) 0:00:41.508 **********
2026-04-05 00:53:39.336283 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:39.336289 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.336296 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.336302 | orchestrator |
2026-04-05 00:53:39.336308 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-05 00:53:39.336314 | orchestrator | Sunday 05 April 2026 00:49:12 +0000 (0:00:00.323) 0:00:41.832 **********
2026-04-05 00:53:39.336327 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.336333 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.336339 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.336345 | orchestrator |
2026-04-05 00:53:39.336351 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-05 00:53:39.336358 | orchestrator | Sunday 05 April 2026 00:49:13 +0000 (0:00:01.007) 0:00:42.841 **********
2026-04-05 00:53:39.336364 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.336370 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.336376 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.336405 | orchestrator |
2026-04-05 00:53:39.336413 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-05 00:53:39.336420 | orchestrator | Sunday 05 April 2026 00:49:15 +0000 (0:00:01.894) 0:00:44.736 **********
2026-04-05 00:53:39.336426 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:53:39.336432 | orchestrator |
2026-04-05 00:53:39.336438 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-05 00:53:39.336444 | orchestrator | Sunday 05 April 2026 00:49:16 +0000 (0:00:00.980) 0:00:45.716 **********
2026-04-05 00:53:39.336450 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.336457 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.336463 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.336469 | orchestrator |
2026-04-05 00:53:39.336475 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-05 00:53:39.336481 | orchestrator | Sunday 05 April 2026 00:49:20 +0000 (0:00:03.522) 0:00:49.238 **********
2026-04-05 00:53:39.336487 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.336494 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.336500 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.336506 | orchestrator |
2026-04-05 00:53:39.336512 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-05 00:53:39.336518 | orchestrator | Sunday 05 April 2026 00:49:20 +0000 (0:00:00.846) 0:00:50.085 **********
2026-04-05 00:53:39.336524 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.336530 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.336536 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.336543 | orchestrator |
2026-04-05 00:53:39.336549 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-05 00:53:39.336555 | orchestrator | Sunday 05 April 2026 00:49:22 +0000 (0:00:02.096) 0:00:52.182 **********
2026-04-05 00:53:39.336561 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.336567 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.336573 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.336580 | orchestrator |
2026-04-05 00:53:39.336586 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-05 00:53:39.336597 | orchestrator | Sunday 05 April 2026 00:49:25 +0000 (0:00:02.339) 0:00:54.521 **********
2026-04-05 00:53:39.336604 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:39.336610 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.336616 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.336622 | orchestrator |
2026-04-05 00:53:39.336628 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-05 00:53:39.336635 | orchestrator | Sunday 05 April 2026 00:49:26 +0000 (0:00:00.721) 0:00:55.243 **********
2026-04-05 00:53:39.336644 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:39.336654 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.336662 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.336674 | orchestrator |
2026-04-05 00:53:39.336690 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-05 00:53:39.336700 | orchestrator | Sunday 05 April 2026 00:49:26 +0000 (0:00:00.695) 0:00:55.938 **********
2026-04-05 00:53:39.336711 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.336720 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.336739 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.336749 | orchestrator |
2026-04-05 00:53:39.336758 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-05 00:53:39.336768 | orchestrator | Sunday 05 April 2026 00:49:29 +0000 (0:00:02.744) 0:00:58.683 **********
2026-04-05 00:53:39.336779 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.336791 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.336801 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.336811 | orchestrator |
2026-04-05 00:53:39.336822 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-05 00:53:39.336832 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:02.985) 0:01:01.669 **********
2026-04-05 00:53:39.336842 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.336859 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.336870 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.336880 | orchestrator |
2026-04-05 00:53:39.336891 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-05 00:53:39.336901 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:00.990) 0:01:02.659 **********
2026-04-05 00:53:39.336912 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:53:39.336923 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:53:39.336934 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:53:39.336944 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:53:39.336955 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:53:39.336965 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:53:39.336976 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:53:39.336986 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:53:39.336997 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:53:39.337008 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:53:39.337018 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:53:39.337029 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:53:39.337039 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-05 00:53:39.337049 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-05 00:53:39.337060 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-05 00:53:39.337070 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.337080 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.337090 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.337101 | orchestrator |
2026-04-05 00:53:39.337112 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-05 00:53:39.337131 | orchestrator | Sunday 05 April 2026 00:50:28 +0000 (0:00:54.691) 0:01:57.350 **********
2026-04-05 00:53:39.337141 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:39.337151 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.337162 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.337173 | orchestrator |
2026-04-05 00:53:39.337183 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-05 00:53:39.337201 | orchestrator | Sunday 05 April 2026 00:50:28 +0000 (0:00:00.600) 0:01:57.950 **********
2026-04-05 00:53:39.337212 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.337222 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.337233 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.337243 | orchestrator |
2026-04-05 00:53:39.337253 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-05 00:53:39.337264 | orchestrator | Sunday 05 April 2026 00:50:30 +0000 (0:00:01.463) 0:01:59.414 **********
2026-04-05 00:53:39.337275 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.337286 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.337296 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.337307 | orchestrator |
2026-04-05 00:53:39.337317 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-05 00:53:39.337328 | orchestrator | Sunday 05 April 2026 00:50:31 +0000 (0:00:01.494) 0:02:00.909 **********
2026-04-05 00:53:39.337338 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.337348 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.337358 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.337368 | orchestrator |
2026-04-05 00:53:39.337379 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-05 00:53:39.337434 | orchestrator | Sunday 05 April 2026 00:50:58 +0000 (0:00:26.588) 0:02:27.497 **********
2026-04-05 00:53:39.337445 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.337456 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.337466 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.337477 | orchestrator |
2026-04-05 00:53:39.337488 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-05 00:53:39.337499 | orchestrator | Sunday 05 April 2026 00:50:59 +0000 (0:00:00.926) 0:02:28.424 **********
2026-04-05 00:53:39.337509 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.337519 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.337535 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.337546 | orchestrator |
2026-04-05 00:53:39.337556 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-05 00:53:39.337567 | orchestrator | Sunday 05 April 2026 00:51:00 +0000 (0:00:01.249) 0:02:29.673 **********
2026-04-05 00:53:39.337578 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.337588 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.337599 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.337609 | orchestrator |
2026-04-05 00:53:39.337619 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-05 00:53:39.337629 | orchestrator | Sunday 05 April 2026 00:51:01 +0000 (0:00:00.786) 0:02:30.459 **********
2026-04-05 00:53:39.337640 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.337650 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.337661 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.337671 | orchestrator |
2026-04-05 00:53:39.337682 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-05 00:53:39.337693 | orchestrator | Sunday 05 April 2026 00:51:02 +0000 (0:00:01.000) 0:02:31.460 **********
2026-04-05 00:53:39.337703 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.337713 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.337724 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.337734 | orchestrator |
2026-04-05 00:53:39.337745 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-05 00:53:39.337755 | orchestrator | Sunday 05 April 2026 00:51:02 +0000 (0:00:00.607) 0:02:32.068 **********
2026-04-05 00:53:39.337774 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.337784 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.337795 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.337805 | orchestrator |
2026-04-05 00:53:39.337816 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-05 00:53:39.337826 | orchestrator | Sunday 05 April 2026 00:51:04 +0000 (0:00:01.194) 0:02:33.262 **********
2026-04-05 00:53:39.337837 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.337848 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.337858 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.337868 | orchestrator |
2026-04-05 00:53:39.337879 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-05 00:53:39.337889 | orchestrator | Sunday 05 April 2026 00:51:04 +0000 (0:00:00.905) 0:02:34.168 **********
2026-04-05 00:53:39.337900 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.337910 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.337921 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.337932 | orchestrator |
2026-04-05 00:53:39.337942 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-05 00:53:39.337953 | orchestrator | Sunday 05 April 2026 00:51:06 +0000 (0:00:01.116) 0:02:35.284 **********
2026-04-05 00:53:39.337964 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:39.337974 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:39.337984 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:39.337995 | orchestrator |
2026-04-05 00:53:39.338005 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-05 00:53:39.338082 | orchestrator | Sunday 05 April 2026 00:51:07 +0000 (0:00:01.108) 0:02:36.393 **********
2026-04-05 00:53:39.338098 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:39.338109 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.338119 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.338129 | orchestrator |
2026-04-05 00:53:39.338139 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-05 00:53:39.338149 | orchestrator | Sunday 05 April 2026 00:51:07 +0000 (0:00:00.426) 0:02:36.820 **********
2026-04-05 00:53:39.338159 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:39.338223 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.338234 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.338245 | orchestrator |
2026-04-05 00:53:39.338256 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-05 00:53:39.338267 | orchestrator | Sunday 05 April 2026 00:51:08 +0000 (0:00:00.641) 0:02:37.461 **********
2026-04-05 00:53:39.338278 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.338289 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.338299 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.338310 | orchestrator |
2026-04-05 00:53:39.338321 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-05 00:53:39.338331 | orchestrator | Sunday 05 April 2026 00:51:09 +0000 (0:00:00.778) 0:02:38.240 **********
2026-04-05 00:53:39.338342 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.338363 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.338374 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.338405 | orchestrator |
2026-04-05 00:53:39.338417 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-05 00:53:39.338429 | orchestrator | Sunday 05 April 2026 00:51:09 +0000 (0:00:00.802) 0:02:39.042 **********
2026-04-05 00:53:39.338439 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:53:39.338449 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:53:39.338460 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:53:39.338471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:53:39.338491 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:53:39.338499 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:53:39.338506 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:53:39.338512 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:53:39.338518 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:53:39.338533 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-05 00:53:39.338539 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:53:39.338545 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:53:39.338551 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-05 00:53:39.338558 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:53:39.338564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:53:39.338570 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:53:39.338576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:53:39.338582 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:53:39.338588 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:53:39.338594 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:53:39.338600 | orchestrator |
2026-04-05 00:53:39.338606 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-05 00:53:39.338613 | orchestrator |
2026-04-05 00:53:39.338619 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-05 00:53:39.338625 | orchestrator | Sunday 05 April 2026 00:51:13 +0000 (0:00:03.825) 0:02:42.867 **********
2026-04-05 00:53:39.338631 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:53:39.338638 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:53:39.338644 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:53:39.338650 | orchestrator |
2026-04-05 00:53:39.338656 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-05 00:53:39.338662 | orchestrator | Sunday 05 April 2026 00:51:14 +0000 (0:00:00.390) 0:02:43.258 **********
2026-04-05 00:53:39.338668 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:53:39.338674 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:53:39.338681 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:53:39.338687 | orchestrator |
2026-04-05 00:53:39.338693 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-05 00:53:39.338699 | orchestrator | Sunday 05 April 2026 00:51:14 +0000 (0:00:00.772) 0:02:44.031 **********
2026-04-05 00:53:39.338705 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:53:39.338711 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:53:39.338717 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:53:39.338723 | orchestrator |
2026-04-05 00:53:39.338729 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-05 00:53:39.338735 | orchestrator | Sunday 05 April 2026 00:51:15 +0000 (0:00:00.512) 0:02:44.543 **********
2026-04-05 00:53:39.338742 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:53:39.338748 | orchestrator |
2026-04-05 00:53:39.338754 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-05 00:53:39.338767 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:00.837) 0:02:45.380 **********
2026-04-05 00:53:39.338773 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:53:39.338779 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:53:39.338785 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:53:39.338801 | orchestrator |
2026-04-05 00:53:39.338814 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-05 00:53:39.338821 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:00.357) 0:02:45.737 **********
2026-04-05 00:53:39.338827 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:53:39.338833 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:53:39.338839 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:53:39.338846 | orchestrator |
2026-04-05 00:53:39.338852 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-05 00:53:39.338863 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:00.418) 0:02:46.156 **********
2026-04-05 00:53:39.338869 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:53:39.338876 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:53:39.338882 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:53:39.338888 | orchestrator |
2026-04-05 00:53:39.338894 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-05 00:53:39.338900 | orchestrator | Sunday 05 April 2026 00:51:17 +0000 (0:00:00.630) 0:02:46.787 **********
2026-04-05 00:53:39.338906 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:53:39.338912 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:53:39.338918 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:53:39.338924 | orchestrator |
2026-04-05 00:53:39.338930 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-05 00:53:39.338936 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:00.788) 0:02:47.575 **********
2026-04-05 00:53:39.338942 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:53:39.338949 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:53:39.338954 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:53:39.338960 | orchestrator |
2026-04-05 00:53:39.338967 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-05 00:53:39.338973 | orchestrator | Sunday 05 April 2026 00:51:19 +0000 (0:00:01.256) 0:02:48.832 **********
2026-04-05 00:53:39.338979 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:53:39.338985 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:53:39.338991 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:53:39.338997 | orchestrator |
2026-04-05 00:53:39.339004 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-05 00:53:39.339010 | orchestrator | Sunday 05 April 2026 00:51:20 +0000 (0:00:01.363) 0:02:50.195 **********
2026-04-05 00:53:39.339016 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:53:39.339025 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:53:39.339032 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:53:39.339038 | orchestrator |
2026-04-05 00:53:39.339044 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-05 00:53:39.339050 | orchestrator |
2026-04-05 00:53:39.339056 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-05 00:53:39.339062 | orchestrator | Sunday 05 April 2026 00:51:32 +0000 (0:00:11.240) 0:03:01.436 **********
2026-04-05 00:53:39.339068 | orchestrator | ok: [testbed-manager]
2026-04-05 00:53:39.339074 | orchestrator |
2026-04-05 00:53:39.339081 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-05 00:53:39.339087 | orchestrator | Sunday 05 April 2026 00:51:32 +0000 (0:00:00.778) 0:03:02.214 **********
2026-04-05 00:53:39.339093 | orchestrator | changed: [testbed-manager]
2026-04-05 00:53:39.339099 | orchestrator |
2026-04-05 00:53:39.339105 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 00:53:39.339111 | orchestrator | Sunday 05 April 2026 00:51:33 +0000 (0:00:00.403) 0:03:02.618 **********
2026-04-05 00:53:39.339117 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 00:53:39.339129 | orchestrator |
2026-04-05 00:53:39.339136 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 00:53:39.339142 | orchestrator | Sunday 05 April 2026 00:51:33 +0000 (0:00:00.545) 0:03:03.163 **********
2026-04-05 00:53:39.339148 | orchestrator | changed: [testbed-manager]
2026-04-05 00:53:39.339155 | orchestrator |
2026-04-05 00:53:39.339161 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-05 00:53:39.339167 | orchestrator | Sunday 05 April 2026 00:51:34 +0000 (0:00:00.769) 0:03:03.932 **********
2026-04-05 00:53:39.339173 | orchestrator | changed: [testbed-manager]
2026-04-05 00:53:39.339180 | orchestrator |
2026-04-05 00:53:39.339186 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-05 00:53:39.339192 | orchestrator | Sunday 05 April 2026 00:51:35 +0000 (0:00:00.630) 0:03:04.563 **********
2026-04-05 00:53:39.339198 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:53:39.339204 | orchestrator |
2026-04-05 00:53:39.339210 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-05 00:53:39.339216 | orchestrator | Sunday 05 April 2026 00:51:37 +0000 (0:00:02.304) 0:03:06.867 **********
2026-04-05 00:53:39.339222 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:53:39.339228 | orchestrator |
2026-04-05 00:53:39.339235 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-05 00:53:39.339241 | orchestrator | Sunday 05 April 2026 00:51:38 +0000 (0:00:00.983) 0:03:07.851 **********
2026-04-05 00:53:39.339247 | orchestrator | changed: [testbed-manager]
2026-04-05 00:53:39.339253 | orchestrator |
2026-04-05 00:53:39.339259 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-05 00:53:39.339265 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:00.450) 0:03:08.302 **********
2026-04-05 00:53:39.339271 | orchestrator | changed: [testbed-manager]
2026-04-05 00:53:39.339277 | orchestrator |
2026-04-05 00:53:39.339283 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-05 00:53:39.339289 | orchestrator |
2026-04-05 00:53:39.339295 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-05 00:53:39.339302 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:00.169) 0:03:08.806 **********
2026-04-05 00:53:39.339308 | orchestrator | ok: [testbed-manager]
2026-04-05 00:53:39.339314 | orchestrator |
2026-04-05 00:53:39.339320 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-05 00:53:39.339326 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:00.333) 0:03:08.975 **********
2026-04-05 00:53:39.339333 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 00:53:39.339339 | orchestrator |
2026-04-05 00:53:39.339345 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-05 00:53:39.339351 | orchestrator | Sunday 05 April 2026 00:51:40 +0000 (0:00:00.921) 0:03:09.308 **********
2026-04-05 00:53:39.339357 | orchestrator | ok: [testbed-manager]
2026-04-05 00:53:39.339364 | orchestrator |
2026-04-05 00:53:39.339370 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-05 00:53:39.339376 | orchestrator | Sunday 05 April 2026 00:51:41 +0000 (0:00:01.899) 0:03:10.230 **********
2026-04-05 00:53:39.339406 | orchestrator | ok: [testbed-manager]
2026-04-05 00:53:39.339415 | orchestrator |
2026-04-05 00:53:39.339422 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-05 00:53:39.339428 | orchestrator | Sunday 05 April 2026 00:51:42 +0000 (0:00:01.899) 0:03:12.129 **********
2026-04-05 00:53:39.339434 | orchestrator | changed: [testbed-manager]
2026-04-05 00:53:39.339440 | orchestrator |
2026-04-05 00:53:39.339446 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-05 00:53:39.339452 | orchestrator | Sunday 05 April 2026 00:51:43 +0000 (0:00:01.004) 0:03:13.134 **********
2026-04-05 00:53:39.339459 | orchestrator | ok: [testbed-manager]
2026-04-05 00:53:39.339465 | orchestrator |
2026-04-05 00:53:39.339476 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-05 00:53:39.339483 | orchestrator | Sunday 05 April 2026 00:51:44 +0000 (0:00:00.466) 0:03:13.600 **********
2026-04-05 00:53:39.339489 | orchestrator | changed: [testbed-manager]
2026-04-05 00:53:39.339495 | orchestrator |
2026-04-05 00:53:39.339502 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-05 00:53:39.339508 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:08.125) 0:03:21.726 **********
2026-04-05 00:53:39.339514 | orchestrator | changed: [testbed-manager]
2026-04-05 00:53:39.339520 | orchestrator |
2026-04-05 00:53:39.339527 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-05 00:53:39.339533 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:15.505) 0:03:37.232 **********
2026-04-05 00:53:39.339539 | orchestrator | ok: [testbed-manager]
2026-04-05 00:53:39.339546 | orchestrator |
2026-04-05 00:53:39.339552 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-05 00:53:39.339558 | orchestrator |
2026-04-05 00:53:39.339564 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-05 00:53:39.339574 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:00.547) 0:03:37.780 **********
2026-04-05 00:53:39.339581 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:39.339587 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:39.339594 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:39.339600 | orchestrator |
2026-04-05 00:53:39.339606 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-05 00:53:39.339613 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:00.428) 0:03:38.208 **********
2026-04-05 00:53:39.339619 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:39.339625 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:39.339631 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:39.339637 | orchestrator |
2026-04-05 00:53:39.339644 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-05 00:53:39.339650 | orchestrator | Sunday 05 April 2026 00:52:09 +0000 (0:00:00.637) 0:03:38.845 **********
2026-04-05 00:53:39.339657 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:53:39.339663 | orchestrator |
2026-04-05 00:53:39.339669 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-05 00:53:39.339676 | orchestrator | Sunday 05
April 2026 00:52:10 +0000 (0:00:00.649) 0:03:39.495 ********** 2026-04-05 00:53:39.339682 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-05 00:53:39.339688 | orchestrator | 2026-04-05 00:53:39.339695 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-05 00:53:39.339701 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:01.128) 0:03:40.623 ********** 2026-04-05 00:53:39.339707 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 00:53:39.339713 | orchestrator | 2026-04-05 00:53:39.339720 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-05 00:53:39.339726 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:01.030) 0:03:41.654 ********** 2026-04-05 00:53:39.339732 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.339739 | orchestrator | 2026-04-05 00:53:39.339745 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-05 00:53:39.339751 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:00.156) 0:03:41.810 ********** 2026-04-05 00:53:39.339767 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 00:53:39.339774 | orchestrator | 2026-04-05 00:53:39.339780 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-05 00:53:39.339786 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:01.140) 0:03:42.951 ********** 2026-04-05 00:53:39.339793 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.339799 | orchestrator | 2026-04-05 00:53:39.339805 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-05 00:53:39.339817 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:00.132) 0:03:43.084 ********** 2026-04-05 00:53:39.339823 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.339829 | orchestrator | 2026-04-05 
00:53:39.339835 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-05 00:53:39.339842 | orchestrator | Sunday 05 April 2026 00:52:14 +0000 (0:00:00.352) 0:03:43.437 ********** 2026-04-05 00:53:39.339848 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.339854 | orchestrator | 2026-04-05 00:53:39.339860 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-05 00:53:39.339866 | orchestrator | Sunday 05 April 2026 00:52:14 +0000 (0:00:00.131) 0:03:43.568 ********** 2026-04-05 00:53:39.339873 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.339879 | orchestrator | 2026-04-05 00:53:39.339885 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-05 00:53:39.339891 | orchestrator | Sunday 05 April 2026 00:52:14 +0000 (0:00:00.192) 0:03:43.760 ********** 2026-04-05 00:53:39.339897 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-05 00:53:39.339903 | orchestrator | 2026-04-05 00:53:39.339910 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-05 00:53:39.339916 | orchestrator | Sunday 05 April 2026 00:52:20 +0000 (0:00:05.538) 0:03:49.298 ********** 2026-04-05 00:53:39.339922 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-05 00:53:39.339933 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-04-05 00:53:39.339939 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-05 00:53:39.339946 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-05 00:53:39.339952 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-05 00:53:39.339958 | orchestrator | 2026-04-05 00:53:39.339965 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-05 00:53:39.339971 | orchestrator | Sunday 05 April 2026 00:53:03 +0000 (0:00:43.467) 0:04:32.766 ********** 2026-04-05 00:53:39.339977 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 00:53:39.339983 | orchestrator | 2026-04-05 00:53:39.339989 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-05 00:53:39.339996 | orchestrator | Sunday 05 April 2026 00:53:05 +0000 (0:00:02.136) 0:04:34.903 ********** 2026-04-05 00:53:39.340002 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-05 00:53:39.340008 | orchestrator | 2026-04-05 00:53:39.340014 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-05 00:53:39.340020 | orchestrator | Sunday 05 April 2026 00:53:07 +0000 (0:00:01.870) 0:04:36.773 ********** 2026-04-05 00:53:39.340026 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-05 00:53:39.340032 | orchestrator | 2026-04-05 00:53:39.340038 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-05 00:53:39.340045 | orchestrator | Sunday 05 April 2026 00:53:08 +0000 (0:00:01.205) 0:04:37.979 ********** 2026-04-05 00:53:39.340051 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.340057 | orchestrator | 2026-04-05 00:53:39.340063 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-05 00:53:39.340070 | orchestrator 
| Sunday 05 April 2026 00:53:08 +0000 (0:00:00.153) 0:04:38.132 ********** 2026-04-05 00:53:39.340076 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-05 00:53:39.340083 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-05 00:53:39.340089 | orchestrator | 2026-04-05 00:53:39.340095 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-05 00:53:39.340102 | orchestrator | Sunday 05 April 2026 00:53:11 +0000 (0:00:02.284) 0:04:40.417 ********** 2026-04-05 00:53:39.340108 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.340115 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.340126 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.340132 | orchestrator | 2026-04-05 00:53:39.340161 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-05 00:53:39.340168 | orchestrator | Sunday 05 April 2026 00:53:11 +0000 (0:00:00.557) 0:04:40.975 ********** 2026-04-05 00:53:39.340174 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:39.340181 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:39.340187 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:39.340193 | orchestrator | 2026-04-05 00:53:39.340199 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-05 00:53:39.340206 | orchestrator | 2026-04-05 00:53:39.340212 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-05 00:53:39.340218 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:00.962) 0:04:41.937 ********** 2026-04-05 00:53:39.340224 | orchestrator | ok: [testbed-manager] 2026-04-05 00:53:39.340230 | orchestrator | 2026-04-05 00:53:39.340240 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-04-05 00:53:39.340251 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:00.168) 0:04:42.105 ********** 2026-04-05 00:53:39.340264 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:53:39.340279 | orchestrator | 2026-04-05 00:53:39.340290 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-05 00:53:39.340300 | orchestrator | Sunday 05 April 2026 00:53:13 +0000 (0:00:00.297) 0:04:42.403 ********** 2026-04-05 00:53:39.340310 | orchestrator | changed: [testbed-manager] 2026-04-05 00:53:39.340321 | orchestrator | 2026-04-05 00:53:39.340332 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-05 00:53:39.340343 | orchestrator | 2026-04-05 00:53:39.340354 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-05 00:53:39.340365 | orchestrator | Sunday 05 April 2026 00:53:19 +0000 (0:00:06.446) 0:04:48.850 ********** 2026-04-05 00:53:39.340372 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:53:39.340379 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:53:39.340426 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:53:39.340433 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:39.340439 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:39.340446 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:39.340452 | orchestrator | 2026-04-05 00:53:39.340458 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-05 00:53:39.340464 | orchestrator | Sunday 05 April 2026 00:53:20 +0000 (0:00:00.681) 0:04:49.532 ********** 2026-04-05 00:53:39.340474 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-05 00:53:39.340485 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-04-05 00:53:39.340548 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-05 00:53:39.340562 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-05 00:53:39.340572 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-05 00:53:39.340582 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-05 00:53:39.340593 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-05 00:53:39.340603 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-05 00:53:39.340625 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-05 00:53:39.340635 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-05 00:53:39.340647 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-05 00:53:39.340655 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-05 00:53:39.340670 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-05 00:53:39.340677 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-05 00:53:39.340687 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-05 00:53:39.340697 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-05 00:53:39.340708 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-05 00:53:39.340718 | orchestrator | ok: [testbed-node-4 -> 
localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-05 00:53:39.340728 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-05 00:53:39.340738 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-05 00:53:39.340749 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-05 00:53:39.340766 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-05 00:53:39.340777 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-05 00:53:39.340787 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-05 00:53:39.340799 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-05 00:53:39.340805 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-05 00:53:39.340811 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-05 00:53:39.340817 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-05 00:53:39.340823 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-05 00:53:39.340830 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-05 00:53:39.340836 | orchestrator | 2026-04-05 00:53:39.340842 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-05 00:53:39.340848 | orchestrator | Sunday 05 April 2026 00:53:36 +0000 (0:00:15.808) 0:05:05.340 ********** 2026-04-05 00:53:39.340854 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.340861 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.340867 | 
orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.340873 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.340880 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.340886 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.340892 | orchestrator | 2026-04-05 00:53:39.340898 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-05 00:53:39.340905 | orchestrator | Sunday 05 April 2026 00:53:37 +0000 (0:00:00.931) 0:05:06.271 ********** 2026-04-05 00:53:39.340911 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:53:39.340917 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:53:39.340923 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:53:39.340930 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:39.340936 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:39.340942 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:39.340949 | orchestrator | 2026-04-05 00:53:39.340955 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:53:39.340961 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:53:39.340970 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-05 00:53:39.340977 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-05 00:53:39.340989 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-05 00:53:39.340995 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 00:53:39.341002 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 00:53:39.341008 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 00:53:39.341014 | orchestrator | 2026-04-05 00:53:39.341021 | orchestrator | 2026-04-05 00:53:39.341027 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:53:39.341040 | orchestrator | Sunday 05 April 2026 00:53:37 +0000 (0:00:00.507) 0:05:06.779 ********** 2026-04-05 00:53:39.341047 | orchestrator | =============================================================================== 2026-04-05 00:53:39.341055 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.69s 2026-04-05 00:53:39.341062 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.47s 2026-04-05 00:53:39.341070 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.59s 2026-04-05 00:53:39.341077 | orchestrator | Manage labels ---------------------------------------------------------- 15.81s 2026-04-05 00:53:39.341084 | orchestrator | kubectl : Install required packages ------------------------------------ 15.51s 2026-04-05 00:53:39.341091 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.24s 2026-04-05 00:53:39.341099 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 8.47s 2026-04-05 00:53:39.341106 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.13s 2026-04-05 00:53:39.341113 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.45s 2026-04-05 00:53:39.341120 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.54s 2026-04-05 00:53:39.341127 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.83s 2026-04-05 
00:53:39.341134 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.52s 2026-04-05 00:53:39.341146 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.99s 2026-04-05 00:53:39.341153 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.97s 2026-04-05 00:53:39.341160 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.74s 2026-04-05 00:53:39.341168 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.65s 2026-04-05 00:53:39.341175 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.59s 2026-04-05 00:53:39.341182 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.34s 2026-04-05 00:53:39.341189 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.30s 2026-04-05 00:53:39.341196 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.28s 2026-04-05 00:53:39.341203 | orchestrator | 2026-04-05 00:53:39 | INFO  | Task 2a276260-ee3f-4f88-8220-b49b9a198866 is in state STARTED 2026-04-05 00:53:39.341211 | orchestrator | 2026-04-05 00:53:39 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:39.341309 | orchestrator | 2026-04-05 00:53:39 | INFO  | Task 0e3e4ca9-b94d-4bf6-9157-7a4a7fcfb64f is in state STARTED 2026-04-05 00:53:39.341770 | orchestrator | 2026-04-05 00:53:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:42.404793 | orchestrator | 2026-04-05 00:53:42 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:42.405804 | orchestrator | 2026-04-05 00:53:42 | INFO  | Task 2a276260-ee3f-4f88-8220-b49b9a198866 is in state STARTED 2026-04-05 00:53:42.407028 | orchestrator | 2026-04-05 00:53:42 | INFO  | Task 
242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:42.408233 | orchestrator | 2026-04-05 00:53:42 | INFO  | Task 0e3e4ca9-b94d-4bf6-9157-7a4a7fcfb64f is in state STARTED 2026-04-05 00:53:42.408354 | orchestrator | 2026-04-05 00:53:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:45.459774 | orchestrator | 2026-04-05 00:53:45 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:45.459832 | orchestrator | 2026-04-05 00:53:45 | INFO  | Task 2a276260-ee3f-4f88-8220-b49b9a198866 is in state STARTED 2026-04-05 00:53:45.461001 | orchestrator | 2026-04-05 00:53:45 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:45.462397 | orchestrator | 2026-04-05 00:53:45 | INFO  | Task 0e3e4ca9-b94d-4bf6-9157-7a4a7fcfb64f is in state STARTED 2026-04-05 00:53:45.463035 | orchestrator | 2026-04-05 00:53:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:48.512729 | orchestrator | 2026-04-05 00:53:48 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:48.514597 | orchestrator | 2026-04-05 00:53:48 | INFO  | Task 2a276260-ee3f-4f88-8220-b49b9a198866 is in state SUCCESS 2026-04-05 00:53:48.519801 | orchestrator | 2026-04-05 00:53:48 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:48.525855 | orchestrator | 2026-04-05 00:53:48 | INFO  | Task 0e3e4ca9-b94d-4bf6-9157-7a4a7fcfb64f is in state STARTED 2026-04-05 00:53:48.526011 | orchestrator | 2026-04-05 00:53:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:51.573979 | orchestrator | 2026-04-05 00:53:51 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:51.576783 | orchestrator | 2026-04-05 00:53:51 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:51.580607 | orchestrator | 2026-04-05 00:53:51 | INFO  | Task 
0e3e4ca9-b94d-4bf6-9157-7a4a7fcfb64f is in state SUCCESS 2026-04-05 00:53:51.580673 | orchestrator | 2026-04-05 00:53:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:54.626112 | orchestrator | 2026-04-05 00:53:54 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:54.627155 | orchestrator | 2026-04-05 00:53:54 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:54.627703 | orchestrator | 2026-04-05 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:53:57.693120 | orchestrator | 2026-04-05 00:53:57 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:53:57.697310 | orchestrator | 2026-04-05 00:53:57 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:53:57.697442 | orchestrator | 2026-04-05 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:00.730792 | orchestrator | 2026-04-05 00:54:00 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:00.733467 | orchestrator | 2026-04-05 00:54:00 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:00.733549 | orchestrator | 2026-04-05 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:03.774571 | orchestrator | 2026-04-05 00:54:03 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:03.777064 | orchestrator | 2026-04-05 00:54:03 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:03.777111 | orchestrator | 2026-04-05 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:06.832159 | orchestrator | 2026-04-05 00:54:06 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:06.834589 | orchestrator | 2026-04-05 00:54:06 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 
00:54:06.834617 | orchestrator | 2026-04-05 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:09.880328 | orchestrator | 2026-04-05 00:54:09 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:09.882692 | orchestrator | 2026-04-05 00:54:09 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:09.882750 | orchestrator | 2026-04-05 00:54:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:12.929882 | orchestrator | 2026-04-05 00:54:12 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:12.932312 | orchestrator | 2026-04-05 00:54:12 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:12.932456 | orchestrator | 2026-04-05 00:54:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:15.980196 | orchestrator | 2026-04-05 00:54:15 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:15.982391 | orchestrator | 2026-04-05 00:54:15 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:15.982464 | orchestrator | 2026-04-05 00:54:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:19.035045 | orchestrator | 2026-04-05 00:54:19 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:19.037196 | orchestrator | 2026-04-05 00:54:19 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:19.037279 | orchestrator | 2026-04-05 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:22.082652 | orchestrator | 2026-04-05 00:54:22 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:22.085601 | orchestrator | 2026-04-05 00:54:22 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:22.085649 | orchestrator | 2026-04-05 00:54:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 00:54:25.131935 | orchestrator | 2026-04-05 00:54:25 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:25.135239 | orchestrator | 2026-04-05 00:54:25 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:25.135309 | orchestrator | 2026-04-05 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:28.187440 | orchestrator | 2026-04-05 00:54:28 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:28.188893 | orchestrator | 2026-04-05 00:54:28 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:28.188986 | orchestrator | 2026-04-05 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:31.250706 | orchestrator | 2026-04-05 00:54:31 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:31.251014 | orchestrator | 2026-04-05 00:54:31 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:31.251051 | orchestrator | 2026-04-05 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:34.296099 | orchestrator | 2026-04-05 00:54:34 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:34.298716 | orchestrator | 2026-04-05 00:54:34 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:34.298781 | orchestrator | 2026-04-05 00:54:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:37.335510 | orchestrator | 2026-04-05 00:54:37 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:37.336894 | orchestrator | 2026-04-05 00:54:37 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:37.336946 | orchestrator | 2026-04-05 00:54:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:40.385128 | orchestrator | 2026-04-05 
00:54:40 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:40.386143 | orchestrator | 2026-04-05 00:54:40 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:40.386196 | orchestrator | 2026-04-05 00:54:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:43.426090 | orchestrator | 2026-04-05 00:54:43 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:43.426483 | orchestrator | 2026-04-05 00:54:43 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:43.429188 | orchestrator | 2026-04-05 00:54:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:46.473835 | orchestrator | 2026-04-05 00:54:46 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:46.473931 | orchestrator | 2026-04-05 00:54:46 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:46.473945 | orchestrator | 2026-04-05 00:54:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:49.510227 | orchestrator | 2026-04-05 00:54:49 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:49.511919 | orchestrator | 2026-04-05 00:54:49 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:49.511962 | orchestrator | 2026-04-05 00:54:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:52.567700 | orchestrator | 2026-04-05 00:54:52 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED 2026-04-05 00:54:52.569595 | orchestrator | 2026-04-05 00:54:52 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:54:52.569653 | orchestrator | 2026-04-05 00:54:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:54:55.607290 | orchestrator | 2026-04-05 00:54:55 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state 
STARTED
2026-04-05 00:54:55.608903 | orchestrator | 2026-04-05 00:54:55 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:54:55.609173 | orchestrator | 2026-04-05 00:54:55 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:54:58.653396 | orchestrator | 2026-04-05 00:54:58 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:54:58.654689 | orchestrator | 2026-04-05 00:54:58 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:54:58.655955 | orchestrator | 2026-04-05 00:54:58 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:01.700983 | orchestrator | 2026-04-05 00:55:01 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:01.702185 | orchestrator | 2026-04-05 00:55:01 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:01.702292 | orchestrator | 2026-04-05 00:55:01 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:04.738766 | orchestrator | 2026-04-05 00:55:04 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:04.739028 | orchestrator | 2026-04-05 00:55:04 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:04.739053 | orchestrator | 2026-04-05 00:55:04 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:07.784082 | orchestrator | 2026-04-05 00:55:07 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:07.784207 | orchestrator | 2026-04-05 00:55:07 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:07.784223 | orchestrator | 2026-04-05 00:55:07 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:10.818106 | orchestrator | 2026-04-05 00:55:10 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:10.819099 | orchestrator | 2026-04-05 00:55:10 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:10.819215 | orchestrator | 2026-04-05 00:55:10 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:13.865415 | orchestrator | 2026-04-05 00:55:13 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:13.868797 | orchestrator | 2026-04-05 00:55:13 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:13.869415 | orchestrator | 2026-04-05 00:55:13 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:16.918102 | orchestrator | 2026-04-05 00:55:16 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:16.918504 | orchestrator | 2026-04-05 00:55:16 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:16.918535 | orchestrator | 2026-04-05 00:55:16 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:19.963040 | orchestrator | 2026-04-05 00:55:19 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:19.965320 | orchestrator | 2026-04-05 00:55:19 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:19.965414 | orchestrator | 2026-04-05 00:55:19 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:23.017095 | orchestrator | 2026-04-05 00:55:23 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:23.017189 | orchestrator | 2026-04-05 00:55:23 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:23.017199 | orchestrator | 2026-04-05 00:55:23 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:26.060145 | orchestrator | 2026-04-05 00:55:26 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:26.061599 | orchestrator | 2026-04-05 00:55:26 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05
00:55:26.061708 | orchestrator | 2026-04-05 00:55:26 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:29.110622 | orchestrator | 2026-04-05 00:55:29 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:29.112842 | orchestrator | 2026-04-05 00:55:29 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:29.113352 | orchestrator | 2026-04-05 00:55:29 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:32.158551 | orchestrator | 2026-04-05 00:55:32 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:32.161366 | orchestrator | 2026-04-05 00:55:32 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:32.161436 | orchestrator | 2026-04-05 00:55:32 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:35.202553 | orchestrator | 2026-04-05 00:55:35 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:35.204609 | orchestrator | 2026-04-05 00:55:35 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:35.204744 | orchestrator | 2026-04-05 00:55:35 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:38.245219 | orchestrator | 2026-04-05 00:55:38 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:38.245564 | orchestrator | 2026-04-05 00:55:38 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:38.245596 | orchestrator | 2026-04-05 00:55:38 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:41.295384 | orchestrator | 2026-04-05 00:55:41 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:41.297397 | orchestrator | 2026-04-05 00:55:41 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:41.297463 | orchestrator | 2026-04-05 00:55:41 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:44.341921 | orchestrator | 2026-04-05 00:55:44 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:44.343531 | orchestrator | 2026-04-05 00:55:44 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:44.343564 | orchestrator | 2026-04-05 00:55:44 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:47.389467 | orchestrator | 2026-04-05 00:55:47 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:47.391106 | orchestrator | 2026-04-05 00:55:47 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:47.391163 | orchestrator | 2026-04-05 00:55:47 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:50.439848 | orchestrator | 2026-04-05 00:55:50 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:50.439941 | orchestrator | 2026-04-05 00:55:50 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:50.439952 | orchestrator | 2026-04-05 00:55:50 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:53.483319 | orchestrator | 2026-04-05 00:55:53 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:53.483554 | orchestrator | 2026-04-05 00:55:53 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:53.483588 | orchestrator | 2026-04-05 00:55:53 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:56.531442 | orchestrator | 2026-04-05 00:55:56 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:56.533813 | orchestrator | 2026-04-05 00:55:56 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:56.533946 | orchestrator | 2026-04-05 00:55:56 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:55:59.588003 | orchestrator | 2026-04-05
00:55:59 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:55:59.588458 | orchestrator | 2026-04-05 00:55:59 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:55:59.588506 | orchestrator | 2026-04-05 00:55:59 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:02.635080 | orchestrator | 2026-04-05 00:56:02 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:02.636574 | orchestrator | 2026-04-05 00:56:02 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:02.636898 | orchestrator | 2026-04-05 00:56:02 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:05.678793 | orchestrator | 2026-04-05 00:56:05 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:05.680050 | orchestrator | 2026-04-05 00:56:05 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:05.680093 | orchestrator | 2026-04-05 00:56:05 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:08.727780 | orchestrator | 2026-04-05 00:56:08 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:08.728464 | orchestrator | 2026-04-05 00:56:08 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:08.728710 | orchestrator | 2026-04-05 00:56:08 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:11.762277 | orchestrator | 2026-04-05 00:56:11 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:11.762746 | orchestrator | 2026-04-05 00:56:11 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:11.762833 | orchestrator | 2026-04-05 00:56:11 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:14.810381 | orchestrator | 2026-04-05 00:56:14 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:14.811479 | orchestrator | 2026-04-05 00:56:14 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:14.811519 | orchestrator | 2026-04-05 00:56:14 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:17.859073 | orchestrator | 2026-04-05 00:56:17 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:17.859502 | orchestrator | 2026-04-05 00:56:17 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:17.859594 | orchestrator | 2026-04-05 00:56:17 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:20.915647 | orchestrator | 2026-04-05 00:56:20 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:20.916706 | orchestrator | 2026-04-05 00:56:20 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:20.916871 | orchestrator | 2026-04-05 00:56:20 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:23.969689 | orchestrator | 2026-04-05 00:56:23 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:23.971099 | orchestrator | 2026-04-05 00:56:23 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:23.971123 | orchestrator | 2026-04-05 00:56:23 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:27.007762 | orchestrator | 2026-04-05 00:56:27 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:27.008378 | orchestrator | 2026-04-05 00:56:27 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:27.008415 | orchestrator | 2026-04-05 00:56:27 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:30.054726 | orchestrator | 2026-04-05 00:56:30 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:30.055957 | orchestrator | 2026-04-05 00:56:30 | INFO
| Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:30.056115 | orchestrator | 2026-04-05 00:56:30 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:33.100854 | orchestrator | 2026-04-05 00:56:33 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state STARTED
2026-04-05 00:56:33.102326 | orchestrator | 2026-04-05 00:56:33 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:56:33.102694 | orchestrator | 2026-04-05 00:56:33 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:36.150283 | orchestrator | 2026-04-05 00:56:36 | INFO  | Task fbaae14b-4edc-45a1-aa09-9cb6231781cc is in state SUCCESS
2026-04-05 00:56:36.151356 | orchestrator |
2026-04-05 00:56:36.151391 | orchestrator |
2026-04-05 00:56:36.151405 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-05 00:56:36.151417 | orchestrator |
2026-04-05 00:56:36.151428 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 00:56:36.151440 | orchestrator | Sunday 05 April 2026 00:53:42 +0000 (0:00:00.248) 0:00:00.248 **********
2026-04-05 00:56:36.151452 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 00:56:36.151464 | orchestrator |
2026-04-05 00:56:36.151476 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 00:56:36.151487 | orchestrator | Sunday 05 April 2026 00:53:43 +0000 (0:00:01.062) 0:00:01.310 **********
2026-04-05 00:56:36.151499 | orchestrator | changed: [testbed-manager]
2026-04-05 00:56:36.151510 | orchestrator |
2026-04-05 00:56:36.151521 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-05 00:56:36.151533 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:01.369) 0:00:02.679 **********
2026-04-05 00:56:36.151544 | orchestrator | changed: [testbed-manager]
2026-04-05 00:56:36.151555 | orchestrator |
2026-04-05 00:56:36.151566 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:56:36.151578 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:56:36.151590 | orchestrator |
2026-04-05 00:56:36.151602 | orchestrator |
2026-04-05 00:56:36.151613 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:56:36.151624 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:00.429) 0:00:03.109 **********
2026-04-05 00:56:36.151635 | orchestrator | ===============================================================================
2026-04-05 00:56:36.151647 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.37s
2026-04-05 00:56:36.151658 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.06s
2026-04-05 00:56:36.151669 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.43s
2026-04-05 00:56:36.151680 | orchestrator |
2026-04-05 00:56:36.151691 | orchestrator |
2026-04-05 00:56:36.151702 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-05 00:56:36.151713 | orchestrator |
2026-04-05 00:56:36.151761 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-05 00:56:36.151774 | orchestrator | Sunday 05 April 2026 00:53:42 +0000 (0:00:00.242) 0:00:00.242 **********
2026-04-05 00:56:36.151811 | orchestrator | ok: [testbed-manager]
2026-04-05 00:56:36.151823 | orchestrator |
2026-04-05 00:56:36.151834 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-05 00:56:36.151846 | orchestrator | Sunday 05 April 2026 00:53:43 +0000 (0:00:00.986) 0:00:01.229
2026-04-05 00:56:36.151857 | orchestrator | ok: [testbed-manager]
2026-04-05 00:56:36.151868 | orchestrator |
2026-04-05 00:56:36.151879 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 00:56:36.151890 | orchestrator | Sunday 05 April 2026 00:53:44 +0000 (0:00:00.541) 0:00:01.770 **********
2026-04-05 00:56:36.151901 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 00:56:36.151912 | orchestrator |
2026-04-05 00:56:36.151948 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 00:56:36.151959 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:01.049) 0:00:02.820 **********
2026-04-05 00:56:36.151970 | orchestrator | changed: [testbed-manager]
2026-04-05 00:56:36.151983 | orchestrator |
2026-04-05 00:56:36.151996 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-05 00:56:36.152008 | orchestrator | Sunday 05 April 2026 00:53:46 +0000 (0:00:01.204) 0:00:04.025 **********
2026-04-05 00:56:36.152020 | orchestrator | changed: [testbed-manager]
2026-04-05 00:56:36.152033 | orchestrator |
2026-04-05 00:56:36.152045 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-05 00:56:36.152057 | orchestrator | Sunday 05 April 2026 00:53:47 +0000 (0:00:00.625) 0:00:04.650 **********
2026-04-05 00:56:36.152070 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:56:36.152082 | orchestrator |
2026-04-05 00:56:36.152094 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-05 00:56:36.152106 | orchestrator | Sunday 05 April 2026 00:53:49 +0000 (0:00:01.817) 0:00:06.468 **********
2026-04-05 00:56:36.152118 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:56:36.152130 | orchestrator |
2026-04-05 00:56:36.152143 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-05 00:56:36.152156 | orchestrator | Sunday 05 April 2026 00:53:50 +0000 (0:00:01.018) 0:00:07.486 **********
2026-04-05 00:56:36.152170 | orchestrator | ok: [testbed-manager]
2026-04-05 00:56:36.152210 | orchestrator |
2026-04-05 00:56:36.152229 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-05 00:56:36.152248 | orchestrator | Sunday 05 April 2026 00:53:50 +0000 (0:00:00.480) 0:00:07.966 **********
2026-04-05 00:56:36.152266 | orchestrator | ok: [testbed-manager]
2026-04-05 00:56:36.152286 | orchestrator |
2026-04-05 00:56:36.152322 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:56:36.152344 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:56:36.152501 | orchestrator |
2026-04-05 00:56:36.152513 | orchestrator |
2026-04-05 00:56:36.152525 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:56:36.152536 | orchestrator | Sunday 05 April 2026 00:53:50 +0000 (0:00:00.348) 0:00:08.315 **********
2026-04-05 00:56:36.152546 | orchestrator | ===============================================================================
2026-04-05 00:56:36.152557 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.82s
2026-04-05 00:56:36.152567 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.20s
2026-04-05 00:56:36.152578 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.05s
2026-04-05 00:56:36.152604 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.02s
2026-04-05 00:56:36.152615 | orchestrator | Get home directory of operator user ------------------------------------- 0.99s
2026-04-05
00:56:36.152626 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.63s
2026-04-05 00:56:36.152637 | orchestrator | Create .kube directory -------------------------------------------------- 0.54s
2026-04-05 00:56:36.152648 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.48s
2026-04-05 00:56:36.152659 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s
2026-04-05 00:56:36.152669 | orchestrator |
2026-04-05 00:56:36.152911 | orchestrator |
2026-04-05 00:56:36.152972 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:56:36.152984 | orchestrator |
2026-04-05 00:56:36.152995 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:56:36.153006 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:00.824) 0:00:00.824 **********
2026-04-05 00:56:36.153017 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:36.153029 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:36.153053 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:36.153064 | orchestrator |
2026-04-05 00:56:36.153075 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:56:36.153086 | orchestrator | Sunday 05 April 2026 00:50:11 +0000 (0:00:01.172) 0:00:01.997 **********
2026-04-05 00:56:36.153097 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-05 00:56:36.153107 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-05 00:56:36.153118 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-05 00:56:36.153216 | orchestrator |
2026-04-05 00:56:36.153230 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-05 00:56:36.153241 | orchestrator |
2026-04-05 00:56:36.153252 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-05 00:56:36.153263 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:00.715) 0:00:02.713 **********
2026-04-05 00:56:36.153274 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:36.153284 | orchestrator |
2026-04-05 00:56:36.153295 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-05 00:56:36.153306 | orchestrator | Sunday 05 April 2026 00:50:14 +0000 (0:00:02.595) 0:00:05.308 **********
2026-04-05 00:56:36.153317 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:36.153327 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:36.153338 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:36.153349 | orchestrator |
2026-04-05 00:56:36.153360 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-05 00:56:36.153370 | orchestrator | Sunday 05 April 2026 00:50:18 +0000 (0:00:03.403) 0:00:08.712 **********
2026-04-05 00:56:36.153381 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:36.153391 | orchestrator |
2026-04-05 00:56:36.153402 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-05 00:56:36.153413 | orchestrator | Sunday 05 April 2026 00:50:19 +0000 (0:00:00.999) 0:00:09.711 **********
2026-04-05 00:56:36.153424 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:36.153436 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:36.153455 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:36.153473 | orchestrator |
2026-04-05 00:56:36.153490 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-05 00:56:36.153506 | orchestrator | Sunday 05 April 2026 00:50:20 +0000
(0:00:01.314) 0:00:11.026 **********
2026-04-05 00:56:36.153522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:36.153547 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:36.153739 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:36.153783 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:36.153814 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:36.153847 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 00:56:36.154156 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 00:56:36.154212 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:36.154232 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 00:56:36.154267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 00:56:36.154292 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 00:56:36.154315 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 00:56:36.154353 | orchestrator |
2026-04-05 00:56:36.154370 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 00:56:36.154388 | orchestrator | Sunday 05 April 2026 00:50:25 +0000 (0:00:05.428) 0:00:16.455 **********
2026-04-05 00:56:36.154406 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-05 00:56:36.154424 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-05 00:56:36.154441 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-05 00:56:36.154458 | orchestrator |
2026-04-05 00:56:36.154476 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 00:56:36.154496 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:01.231) 0:00:17.686 **********
2026-04-05 00:56:36.154515 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-05 00:56:36.154534 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-05 00:56:36.154553 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-05 00:56:36.154745 | orchestrator |
2026-04-05 00:56:36.154760 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-05 00:56:36.154771 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:02.583) 0:00:20.269 **********
2026-04-05 00:56:36.154782 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-05 00:56:36.154793 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.154829 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-05 00:56:36.154841 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.154880 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-05 00:56:36.154892 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.154903 | orchestrator |
2026-04-05 00:56:36.154914 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-05 00:56:36.154925 | orchestrator | Sunday 05 April 2026 00:50:32 +0000 (0:00:02.455) 0:00:22.725 **********
2026-04-05 00:56:36.154940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:56:36.154957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:56:36.154969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:56:36.154981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:36.155029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:36.155052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:36.155064 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:36.155076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:36.155088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:36.155099 | orchestrator |
2026-04-05 00:56:36.155110 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-05 00:56:36.155121 | orchestrator | Sunday 05 April 2026 00:50:34 +0000 (0:00:02.519) 0:00:25.244 **********
2026-04-05 00:56:36.155132 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.155144 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.155155 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.155173 | orchestrator |
2026-04-05 00:56:36.155215 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-05 00:56:36.155227 | orchestrator | Sunday 05 April 2026 00:50:36 +0000 (0:00:02.013) 0:00:27.258 **********
2026-04-05 00:56:36.155238 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-05 00:56:36.155249 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-05 00:56:36.155260 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-05 00:56:36.155271 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-05 00:56:36.155283 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-05 00:56:36.155294 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-05 00:56:36.155406 | orchestrator |
2026-04-05 00:56:36.155420 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-05 00:56:36.155431 | orchestrator | Sunday 05 April 2026 00:50:39 +0000 (0:00:02.318) 0:00:29.940 **********
2026-04-05 00:56:36.155442 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.155452 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.155463 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.155474 | orchestrator |
2026-04-05 00:56:36.155485 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-05 00:56:36.155496 | orchestrator | Sunday 05 April 2026 00:50:41 +0000 (0:00:02.006) 0:00:32.258 **********
2026-04-05 00:56:36.155507 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:36.155523 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:36.155534 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:36.155545 | orchestrator |
2026-04-05 00:56:36.155556 | orchestrator | TASK [loadbalancer : Removing checks
for services which are disabled] ********** 2026-04-05 00:56:36.155567 | orchestrator | Sunday 05 April 2026 00:50:43 +0000 (0:00:02.006) 0:00:34.265 ********** 2026-04-05 00:56:36.155579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.155601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.155613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.155626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 00:56:36.155658 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.155680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.155692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.155709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.155727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 00:56:36.155739 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.155750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.155762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.155780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.155792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 00:56:36.155803 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.155814 | orchestrator | 2026-04-05 00:56:36.155825 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-05 00:56:36.155836 | orchestrator | Sunday 05 April 2026 00:50:44 +0000 (0:00:01.005) 0:00:35.271 ********** 2026-04-05 00:56:36.155896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.155916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.155928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.155948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.155960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.155972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 00:56:36.155994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.156023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.156051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 00:56:36.156063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.156082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.156094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb', '__omit_place_holder__b4a5a674d1f108fe454b8b5fcd79438c55d10bcb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 00:56:36.156130 | orchestrator | 2026-04-05 00:56:36.156142 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-05 00:56:36.156153 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:04.234) 0:00:39.505 ********** 2026-04-05 00:56:36.156170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.156199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.156218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.156238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.156250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.156262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.156273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.156289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.156301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.156312 | orchestrator | 2026-04-05 00:56:36.156323 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-05 00:56:36.156335 | orchestrator | Sunday 05 April 2026 00:50:53 +0000 (0:00:04.402) 0:00:43.908 ********** 2026-04-05 00:56:36.156346 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-05 00:56:36.156373 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-05 00:56:36.156385 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-05 00:56:36.156396 | orchestrator | 2026-04-05 00:56:36.156407 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-05 00:56:36.156418 | orchestrator | Sunday 05 April 2026 00:50:55 +0000 (0:00:02.150) 0:00:46.059 ********** 2026-04-05 00:56:36.156429 | orchestrator | changed: [testbed-node-0] 
=> (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-05 00:56:36.156440 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-05 00:56:36.156451 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-05 00:56:36.156462 | orchestrator | 2026-04-05 00:56:36.156473 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-05 00:56:36.156484 | orchestrator | Sunday 05 April 2026 00:51:01 +0000 (0:00:05.640) 0:00:51.699 ********** 2026-04-05 00:56:36.156495 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.156506 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.156517 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.156528 | orchestrator | 2026-04-05 00:56:36.156539 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-05 00:56:36.156550 | orchestrator | Sunday 05 April 2026 00:51:02 +0000 (0:00:01.774) 0:00:53.473 ********** 2026-04-05 00:56:36.156561 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-05 00:56:36.156573 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-05 00:56:36.156584 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-05 00:56:36.156595 | orchestrator | 2026-04-05 00:56:36.156605 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-05 00:56:36.156616 | orchestrator | Sunday 05 April 2026 00:51:08 +0000 (0:00:05.697) 0:00:59.170 ********** 2026-04-05 00:56:36.156627 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-05 00:56:36.156638 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-05 00:56:36.156649 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-05 00:56:36.156660 | orchestrator | 2026-04-05 00:56:36.156671 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-05 00:56:36.156681 | orchestrator | Sunday 05 April 2026 00:51:11 +0000 (0:00:02.461) 0:01:01.632 ********** 2026-04-05 00:56:36.156692 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.156703 | orchestrator | 2026-04-05 00:56:36.156714 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-05 00:56:36.156725 | orchestrator | Sunday 05 April 2026 00:51:11 +0000 (0:00:00.657) 0:01:02.290 ********** 2026-04-05 00:56:36.156736 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-05 00:56:36.156747 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-05 00:56:36.156758 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-05 00:56:36.156769 | orchestrator | 2026-04-05 00:56:36.156780 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-05 00:56:36.156791 | orchestrator | Sunday 05 April 2026 00:51:14 +0000 (0:00:02.807) 0:01:05.097 ********** 2026-04-05 00:56:36.156802 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-05 00:56:36.156824 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-05 00:56:36.156835 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-05 00:56:36.156846 | 
orchestrator | 2026-04-05 00:56:36.156857 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-05 00:56:36.156868 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:01.979) 0:01:07.076 ********** 2026-04-05 00:56:36.156879 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.156890 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.156901 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.156912 | orchestrator | 2026-04-05 00:56:36.156923 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-05 00:56:36.156933 | orchestrator | Sunday 05 April 2026 00:51:17 +0000 (0:00:00.675) 0:01:07.752 ********** 2026-04-05 00:56:36.156944 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.156955 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.156966 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.156976 | orchestrator | 2026-04-05 00:56:36.156987 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-05 00:56:36.156998 | orchestrator | Sunday 05 April 2026 00:51:17 +0000 (0:00:00.407) 0:01:08.159 ********** 2026-04-05 00:56:36.157016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.157029 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.157041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.157052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-04-05 00:56:36.157070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.157111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.157124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.157143 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.157156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.157167 | orchestrator | 2026-04-05 00:56:36.157178 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-05 00:56:36.157239 | orchestrator | Sunday 05 April 2026 00:51:21 +0000 (0:00:04.076) 0:01:12.235 ********** 2026-04-05 00:56:36.157251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}})  2026-04-05 00:56:36.157263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.157287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.157299 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.157310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.157328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.157340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.157351 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.157363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.157374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.157392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.157404 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.157414 | orchestrator | 2026-04-05 00:56:36.157426 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-05 00:56:36.157435 | orchestrator | Sunday 05 April 2026 00:51:22 +0000 (0:00:00.643) 0:01:12.879 ********** 2026-04-05 00:56:36.157451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.157476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.157500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.157524 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.157541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.157559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.157586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.157603 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.157634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.157654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.157682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.157700 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.157716 | orchestrator | 2026-04-05 00:56:36.157732 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-05 00:56:36.157750 | orchestrator | Sunday 05 April 2026 00:51:23 +0000 (0:00:00.994) 
0:01:13.873 ********** 2026-04-05 00:56:36.157767 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:56:36.157783 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:56:36.157801 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:56:36.157817 | orchestrator | 2026-04-05 00:56:36.157831 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-05 00:56:36.157841 | orchestrator | Sunday 05 April 2026 00:51:25 +0000 (0:00:01.782) 0:01:15.655 ********** 2026-04-05 00:56:36.157850 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:56:36.157869 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:56:36.157879 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:56:36.157888 | orchestrator | 2026-04-05 00:56:36.157898 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-05 00:56:36.157908 | orchestrator | Sunday 05 April 2026 00:51:26 +0000 (0:00:01.930) 0:01:17.586 ********** 2026-04-05 00:56:36.157918 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:56:36.157927 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:56:36.157937 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:56:36.157947 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 00:56:36.157957 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.157967 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 00:56:36.157976 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.157986 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 00:56:36.157995 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.158005 | orchestrator | 2026-04-05 00:56:36.158015 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-05 00:56:36.158083 | orchestrator | Sunday 05 April 2026 00:51:27 +0000 (0:00:00.905) 0:01:18.492 ********** 2026-04-05 00:56:36.158098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.158109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.158128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.158138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.158156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.158166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.158177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.158220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.158233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.158243 | orchestrator | 2026-04-05 00:56:36.158253 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-05 00:56:36.158263 | orchestrator | Sunday 05 April 2026 00:51:30 +0000 (0:00:02.374) 0:01:20.866 ********** 2026-04-05 00:56:36.158273 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:56:36.158283 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:56:36.158293 | orchestrator | } 2026-04-05 00:56:36.158303 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:56:36.158317 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:56:36.158334 | orchestrator | } 2026-04-05 00:56:36.158343 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:56:36.158353 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:56:36.158363 | orchestrator | } 2026-04-05 00:56:36.158373 | orchestrator | 2026-04-05 00:56:36.158382 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:56:36.158392 | orchestrator | Sunday 05 April 2026 00:51:30 +0000 (0:00:00.365) 0:01:21.232 ********** 2026-04-05 00:56:36.158403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.158413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.158424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.158434 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.158444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.158459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.158470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.158506 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.158524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.158535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.158545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.158555 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.158565 | orchestrator | 2026-04-05 00:56:36.158575 | orchestrator | TASK [include_role : aodh] 
***************************************************** 2026-04-05 00:56:36.158584 | orchestrator | Sunday 05 April 2026 00:51:31 +0000 (0:00:01.275) 0:01:22.507 ********** 2026-04-05 00:56:36.158594 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.158604 | orchestrator | 2026-04-05 00:56:36.158613 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-05 00:56:36.158623 | orchestrator | Sunday 05 April 2026 00:51:32 +0000 (0:00:00.816) 0:01:23.324 ********** 2026-04-05 00:56:36.158639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.158652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.158674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.158686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.158696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.158707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.158721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.158744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.158763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.158781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.158798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.158816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.158834 | orchestrator | 2026-04-05 00:56:36.158849 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-05 00:56:36.158859 | orchestrator | Sunday 05 April 2026 00:51:35 +0000 (0:00:03.235) 0:01:26.559 ********** 2026-04-05 00:56:36.158874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.158899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.158910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 
00:56:36.158920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.158930 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.158940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.158951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.158974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.158990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159000 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.159023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.159034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.159044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159070 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.159080 | orchestrator | 2026-04-05 00:56:36.159104 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-05 00:56:36.159114 | orchestrator | Sunday 05 April 2026 00:51:36 +0000 (0:00:00.940) 0:01:27.500 ********** 2026-04-05 00:56:36.159125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159147 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.159156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.159231 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.159245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.159256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.159266 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.159276 | orchestrator |
2026-04-05 00:56:36.159285 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-04-05 00:56:36.159295 | orchestrator | Sunday 05 April 2026 00:51:38 +0000 (0:00:01.485) 0:01:28.986 **********
2026-04-05 00:56:36.159305 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.159315 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.159324 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.159334 | orchestrator |
2026-04-05 00:56:36.159343 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-04-05 00:56:36.159353 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:01.276) 0:01:30.262 **********
2026-04-05 00:56:36.159362 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.159372 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.159382 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.159391 | orchestrator |
2026-04-05 00:56:36.159401 | orchestrator | TASK [include_role :
barbican] ************************************************* 2026-04-05 00:56:36.159411 | orchestrator | Sunday 05 April 2026 00:51:41 +0000 (0:00:02.240) 0:01:32.503 ********** 2026-04-05 00:56:36.159420 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.159430 | orchestrator | 2026-04-05 00:56:36.159439 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-05 00:56:36.159449 | orchestrator | Sunday 05 April 2026 00:51:42 +0000 (0:00:00.673) 0:01:33.177 ********** 2026-04-05 00:56:36.159460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.159484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.159546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.159589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159615 | 
orchestrator | 2026-04-05 00:56:36.159624 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-05 00:56:36.159634 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:04.241) 0:01:37.418 ********** 2026-04-05 00:56:36.159645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.159722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159780 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.159793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.159808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159831 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.159840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.159862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.159880 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.159888 | orchestrator | 2026-04-05 00:56:36.159896 | orchestrator | TASK [haproxy-config : Configuring firewall for 
barbican] ********************** 2026-04-05 00:56:36.159904 | orchestrator | Sunday 05 April 2026 00:51:48 +0000 (0:00:01.405) 0:01:38.824 ********** 2026-04-05 00:56:36.159912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159935 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.159943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159972 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.159980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.159988 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.159996 | orchestrator | 2026-04-05 00:56:36.160004 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-05 00:56:36.160012 | orchestrator | Sunday 05 April 2026 00:51:49 +0000 (0:00:00.954) 0:01:39.778 ********** 2026-04-05 00:56:36.160020 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.160028 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.160036 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.160044 | orchestrator | 2026-04-05 00:56:36.160052 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-05 00:56:36.160060 | orchestrator | Sunday 05 April 2026 00:51:50 +0000 (0:00:01.375) 0:01:41.153 ********** 2026-04-05 00:56:36.160068 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.160076 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.160084 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.160092 | orchestrator | 2026-04-05 00:56:36.160100 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-05 00:56:36.160108 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:02.385) 0:01:43.539 ********** 2026-04-05 00:56:36.160116 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.160123 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.160131 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.160151 | orchestrator | 2026-04-05 00:56:36.160160 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-05 00:56:36.160167 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.408) 0:01:43.948 ********** 2026-04-05 00:56:36.160175 | orchestrator | included: ceph-rgw for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.160200 | orchestrator | 2026-04-05 00:56:36.160209 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-05 00:56:36.160217 | orchestrator | Sunday 05 April 2026 00:51:54 +0000 (0:00:00.935) 0:01:44.883 ********** 2026-04-05 00:56:36.160242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:56:36.160258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:56:36.160281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:56:36.160290 | orchestrator | 2026-04-05 00:56:36.160298 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-05 00:56:36.160305 | orchestrator | Sunday 05 April 2026 00:51:57 +0000 (0:00:03.481) 0:01:48.364 ********** 2026-04-05 00:56:36.160314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 00:56:36.160322 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.160330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 00:56:36.160343 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.160351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 
00:56:36.160367 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.160375 | orchestrator | 2026-04-05 00:56:36.160383 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-05 00:56:36.160391 | orchestrator | Sunday 05 April 2026 00:51:59 +0000 (0:00:02.020) 0:01:50.385 ********** 2026-04-05 00:56:36.160464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:36.160486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:36.160496 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.160504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:36.160512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:36.160521 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.160551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:36.160566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:36.160580 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.160593 | orchestrator | 2026-04-05 00:56:36.160606 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-05 00:56:36.160619 | orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:02.370) 0:01:52.756 ********** 2026-04-05 00:56:36.160632 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.160646 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.160661 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.160674 | orchestrator | 2026-04-05 00:56:36.160684 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-05 00:56:36.160697 | 
orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:00.445) 0:01:53.202 ********** 2026-04-05 00:56:36.160712 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.160721 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.160729 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.160737 | orchestrator | 2026-04-05 00:56:36.160744 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-05 00:56:36.160752 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:01.421) 0:01:54.624 ********** 2026-04-05 00:56:36.160760 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.160768 | orchestrator | 2026-04-05 00:56:36.160776 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-05 00:56:36.160784 | orchestrator | Sunday 05 April 2026 00:52:05 +0000 (0:00:01.113) 0:01:55.737 ********** 2026-04-05 00:56:36.160809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.160819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.160829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.160837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.160855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.160869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.160879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.160900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.160909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.160922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.160931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})  2026-04-05 00:56:36.160959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.160968 | orchestrator | 2026-04-05 00:56:36.160976 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-05 00:56:36.161006 | orchestrator | Sunday 05 April 2026 00:52:10 +0000 (0:00:05.020) 0:02:00.758 ********** 2026-04-05 00:56:36.161016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.161025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.161085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161117 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.161125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161137 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.161146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.161159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161206 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.161215 | orchestrator | 2026-04-05 00:56:36.161223 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-05 00:56:36.161231 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:01.197) 0:02:01.955 ********** 2026-04-05 00:56:36.161240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.161248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.161256 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:56:36.161268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.161277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.161285 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.161314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.161322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.161331 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.161346 | orchestrator | 2026-04-05 00:56:36.161355 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-05 00:56:36.161363 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:01.406) 0:02:03.362 ********** 2026-04-05 00:56:36.161371 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.161383 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.161391 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.161399 | orchestrator | 2026-04-05 00:56:36.161408 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] 
************* 2026-04-05 00:56:36.161416 | orchestrator | Sunday 05 April 2026 00:52:14 +0000 (0:00:01.337) 0:02:04.700 ********** 2026-04-05 00:56:36.161423 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.161431 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.161439 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.161447 | orchestrator | 2026-04-05 00:56:36.161455 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-05 00:56:36.161463 | orchestrator | Sunday 05 April 2026 00:52:16 +0000 (0:00:02.172) 0:02:06.873 ********** 2026-04-05 00:56:36.161471 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.161479 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.161487 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.161494 | orchestrator | 2026-04-05 00:56:36.161502 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-05 00:56:36.161510 | orchestrator | Sunday 05 April 2026 00:52:16 +0000 (0:00:00.381) 0:02:07.255 ********** 2026-04-05 00:56:36.161518 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.161531 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.161539 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.161551 | orchestrator | 2026-04-05 00:56:36.161564 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-05 00:56:36.161577 | orchestrator | Sunday 05 April 2026 00:52:16 +0000 (0:00:00.299) 0:02:07.554 ********** 2026-04-05 00:56:36.161590 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.161603 | orchestrator | 2026-04-05 00:56:36.161615 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-05 00:56:36.161626 | orchestrator | Sunday 05 April 2026 00:52:18 +0000 
(0:00:01.118) 0:02:08.673 ********** 2026-04-05 00:56:36.161638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.161654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:36.161675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.161763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:36.161777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.161798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:36.161827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161897 | orchestrator | 2026-04-05 00:56:36.161905 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-05 00:56:36.161913 | orchestrator | Sunday 05 April 2026 00:52:23 +0000 (0:00:05.221) 0:02:13.894 ********** 2026-04-05 00:56:36.161926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.161939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:36.161948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.161968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.161977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:36.161995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162069 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.162082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.162097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:36.162113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162154 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.162224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.162251 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.162259 | orchestrator | 2026-04-05 00:56:36.162267 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-05 00:56:36.162275 | orchestrator | Sunday 05 April 2026 00:52:24 +0000 (0:00:00.915) 0:02:14.810 ********** 2026-04-05 00:56:36.162283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.162296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.162304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.162313 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.162321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.162329 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.162341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.162355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.162363 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.162372 | orchestrator | 2026-04-05 00:56:36.162380 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-05 00:56:36.162388 | orchestrator | Sunday 05 April 2026 00:52:25 +0000 (0:00:01.287) 0:02:16.098 ********** 2026-04-05 00:56:36.162396 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.162403 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.162411 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.162419 | orchestrator | 2026-04-05 00:56:36.162427 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-05 00:56:36.162435 | orchestrator | Sunday 05 April 2026 00:52:26 +0000 (0:00:01.368) 0:02:17.466 ********** 2026-04-05 00:56:36.162443 | orchestrator | changed: [testbed-node-0] 
2026-04-05 00:56:36.162450 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.162458 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.162466 | orchestrator | 2026-04-05 00:56:36.162474 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-05 00:56:36.162486 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:02.314) 0:02:19.781 ********** 2026-04-05 00:56:36.162494 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.162502 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.162509 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.162517 | orchestrator | 2026-04-05 00:56:36.162525 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-05 00:56:36.162533 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:00.383) 0:02:20.164 ********** 2026-04-05 00:56:36.162541 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.162549 | orchestrator | 2026-04-05 00:56:36.162556 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-05 00:56:36.162564 | orchestrator | Sunday 05 April 2026 00:52:30 +0000 (0:00:00.815) 0:02:20.980 ********** 2026-04-05 00:56:36.162573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:56:36.162606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 
'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:36.162617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:56:36.162630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:36.162651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:56:36.162664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:36.162678 | orchestrator | 2026-04-05 00:56:36.162686 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-05 00:56:36.162694 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:05.665) 0:02:26.645 ********** 2026-04-05 00:56:36.162708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:56:36.162721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:36.162735 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.162749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:56:36.162759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:56:36.162780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:36.162788 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.162795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:36.162807 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.162814 | orchestrator | 2026-04-05 00:56:36.162821 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-05 00:56:36.162827 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:03.846) 0:02:30.492 ********** 2026-04-05 00:56:36.162837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:36.162845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 
00:56:36.162855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:36.162885 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.162893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:36.162900 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.162907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:36.162918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:36.162925 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.162932 | orchestrator | 2026-04-05 00:56:36.162939 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-05 00:56:36.162945 | orchestrator | Sunday 05 April 2026 00:52:43 +0000 (0:00:03.931) 0:02:34.424 ********** 2026-04-05 00:56:36.162952 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.162959 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.162965 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.162972 | orchestrator | 2026-04-05 00:56:36.162979 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-05 00:56:36.162985 | orchestrator | Sunday 05 April 2026 00:52:45 +0000 (0:00:01.580) 0:02:36.004 ********** 2026-04-05 00:56:36.162992 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.162998 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.163005 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.163011 | orchestrator | 2026-04-05 00:56:36.163018 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-05 00:56:36.163052 | orchestrator | Sunday 05 April 2026 00:52:47 +0000 (0:00:02.306) 0:02:38.311 ********** 2026-04-05 00:56:36.163059 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.163066 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.163072 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
00:56:36.163079 | orchestrator | 2026-04-05 00:56:36.163086 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-05 00:56:36.163095 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:00.347) 0:02:38.659 ********** 2026-04-05 00:56:36.163102 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.163109 | orchestrator | 2026-04-05 00:56:36.163116 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-05 00:56:36.163122 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:00.840) 0:02:39.499 ********** 2026-04-05 00:56:36.163144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.163156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.163167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.163174 | orchestrator | 2026-04-05 00:56:36.163193 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-05 00:56:36.163201 | orchestrator | Sunday 05 April 2026 00:52:52 +0000 (0:00:03.632) 0:02:43.132 ********** 2026-04-05 00:56:36.163208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.163221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.163228 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.163235 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.163249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.163257 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.163264 | orchestrator | 2026-04-05 00:56:36.163274 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-05 00:56:36.163281 | orchestrator | Sunday 05 April 2026 00:52:52 +0000 (0:00:00.407) 0:02:43.539 ********** 2026-04-05 00:56:36.163293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.163300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.163307 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.163313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.163320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.163327 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.163334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.163341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.163348 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.163355 | orchestrator | 2026-04-05 00:56:36.163361 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-05 00:56:36.163368 | orchestrator | Sunday 05 April 2026 00:52:53 +0000 (0:00:00.759) 0:02:44.299 ********** 2026-04-05 00:56:36.163375 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.163381 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.163388 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.163395 | orchestrator | 2026-04-05 00:56:36.163401 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-05 00:56:36.163408 | orchestrator | Sunday 05 April 2026 00:52:55 +0000 (0:00:01.478) 0:02:45.777 ********** 2026-04-05 00:56:36.163414 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.163421 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.163428 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.163434 | orchestrator | 2026-04-05 00:56:36.163441 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-05 00:56:36.163447 | orchestrator | Sunday 05 April 2026 00:52:57 +0000 (0:00:01.880) 0:02:47.657 ********** 2026-04-05 00:56:36.163454 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.163461 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.163468 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.163474 | orchestrator | 2026-04-05 00:56:36.163481 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-05 00:56:36.163488 | orchestrator | Sunday 05 April 2026 00:52:57 +0000 (0:00:00.594) 0:02:48.252 ********** 
2026-04-05 00:56:36.163495 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.163501 | orchestrator | 2026-04-05 00:56:36.163508 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-05 00:56:36.163515 | orchestrator | Sunday 05 April 2026 00:52:58 +0000 (0:00:00.943) 0:02:49.196 ********** 2026-04-05 00:56:36.163531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:56:36.163548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:56:36.163565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:56:36.163573 | orchestrator | 2026-04-05 00:56:36.163580 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-05 00:56:36.163587 | orchestrator | Sunday 05 April 2026 00:53:03 +0000 (0:00:04.850) 0:02:54.047 ********** 2026-04-05 00:56:36.163597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:56:36.163610 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.163623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:56:36.163631 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.163653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:56:36.163665 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.163672 | orchestrator | 2026-04-05 00:56:36.163679 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-05 00:56:36.163686 | orchestrator | Sunday 05 April 2026 00:53:04 +0000 (0:00:01.541) 0:02:55.588 ********** 2026-04-05 00:56:36.163693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:56:36.163700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:36.163708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  
2026-04-05 00:56:36.163715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:36.163722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:56:36.163729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 00:56:36.163736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:56:36.163747 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.163757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:36.163764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:36.163772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:56:36.163779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 00:56:36.163790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:36.163797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:36.163804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 00:56:36.163811 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.163818 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-05 00:56:36.163824 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.163831 | orchestrator |
2026-04-05 00:56:36.163838 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-05 00:56:36.163844 | orchestrator | Sunday 05 April 2026 00:53:06 +0000 (0:00:01.490) 0:02:57.079 **********
2026-04-05 00:56:36.163851 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.163857 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.163864 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.163871 | orchestrator |
2026-04-05 00:56:36.163877 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-05 00:56:36.163884 | orchestrator | Sunday 05 April 2026 00:53:07 +0000 (0:00:01.347) 0:02:58.426 **********
2026-04-05 00:56:36.163891 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.163897 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.163904 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.163910 | orchestrator |
2026-04-05 00:56:36.163917 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-05 00:56:36.163924 | orchestrator | Sunday 05 April 2026 00:53:10 +0000 (0:00:02.222) 0:03:00.649 **********
2026-04-05 00:56:36.163930 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.163937 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.163947 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.163953 | orchestrator |
2026-04-05 00:56:36.163960 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-05 00:56:36.163967 | orchestrator | Sunday 05 April 2026 00:53:10 +0000 (0:00:00.735) 0:03:01.385 **********
2026-04-05 00:56:36.163974 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.163980 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.163987 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.163993 | orchestrator |
2026-04-05 00:56:36.164000 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-05 00:56:36.164007 | orchestrator | Sunday 05 April 2026 00:53:11 +0000 (0:00:00.446) 0:03:01.831 **********
2026-04-05 00:56:36.164013 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:36.164020 | orchestrator |
2026-04-05 00:56:36.164027 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-04-05 00:56:36.164033 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:00.948) 0:03:02.780 **********
2026-04-05 00:56:36.164044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:56:36.164055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:56:36.164064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:56:36.164071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:56:36.164083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:56:36.164096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:56:36.164107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:56:36.164114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:56:36.164122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:56:36.164135 | orchestrator |
2026-04-05 00:56:36.164142 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-04-05 00:56:36.164148 | orchestrator | Sunday 05 April 2026 00:53:16 +0000 (0:00:04.654) 0:03:07.434 **********
2026-04-05 00:56:36.164156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:56:36.164166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:56:36.164174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:56:36.164193 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.164205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:56:36.164217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:56:36.164224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:56:36.164231 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.164242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:56:36.164250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:56:36.164496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:56:36.164522 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.164534 | orchestrator |
2026-04-05 00:56:36.164545 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-04-05 00:56:36.164565 | orchestrator | Sunday 05 April 2026 00:53:17 +0000 (0:00:00.709) 0:03:08.144 **********
2026-04-05 00:56:36.164577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:56:36.164589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:56:36.164601 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.164613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:56:36.164625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:56:36.164634 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.164641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:56:36.164649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-05 00:56:36.164655 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.164662 | orchestrator |
2026-04-05 00:56:36.164669 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-04-05 00:56:36.164676 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:00.924) 0:03:09.069 **********
2026-04-05 00:56:36.164682 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.164689 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.164696 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.164702 | orchestrator |
2026-04-05 00:56:36.164709 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-04-05 00:56:36.164721 | orchestrator | Sunday 05 April 2026 00:53:20 +0000 (0:00:01.560) 0:03:10.630 **********
2026-04-05 00:56:36.164728 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.164735 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.164742 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.164748 | orchestrator |
2026-04-05 00:56:36.164755 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-04-05 00:56:36.164762 | orchestrator | Sunday 05 April 2026 00:53:21 +0000 (0:00:01.899) 0:03:12.529 **********
2026-04-05 00:56:36.164768 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.164780 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.164792 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.164841 | orchestrator |
2026-04-05 00:56:36.164854 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-04-05 00:56:36.164883 | orchestrator | Sunday 05 April 2026 00:53:22 +0000 (0:00:00.698) 0:03:13.228 **********
2026-04-05 00:56:36.164890 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:36.164897 | orchestrator |
2026-04-05 00:56:36.164904 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-04-05 00:56:36.164910 | orchestrator | Sunday 05 April 2026 00:53:24 +0000 (0:00:01.534) 0:03:14.763 **********
2026-04-05 00:56:36.164931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:56:36.164940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.164948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:56:36.164959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.164970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:56:36.164983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.164990 | orchestrator |
2026-04-05 00:56:36.164997 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-04-05 00:56:36.165004 | orchestrator | Sunday 05 April 2026 00:53:30 +0000 (0:00:06.739) 0:03:21.503 **********
2026-04-05 00:56:36.165011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:56:36.165047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.165054 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.165065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:56:36.165080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.165088 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.165098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:56:36.165110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.165126 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.165140 | orchestrator |
2026-04-05 00:56:36.165152 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-04-05 00:56:36.165164 | orchestrator | Sunday 05 April 2026 00:53:32 +0000 (0:00:01.458) 0:03:22.961 **********
2026-04-05 00:56:36.165176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.165226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.165248 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.165259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.165272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.165283 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.165293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.165312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.165324 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.165336 | orchestrator |
2026-04-05 00:56:36.165348 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-04-05 00:56:36.165359 | orchestrator | Sunday 05 April 2026 00:53:33 +0000 (0:00:01.115) 0:03:24.077 **********
2026-04-05 00:56:36.165371 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.165383 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.165409 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.165422 | orchestrator |
2026-04-05 00:56:36.165434 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-04-05 00:56:36.165446 | orchestrator | Sunday 05 April 2026 00:53:35 +0000 (0:00:01.608) 0:03:25.686 **********
2026-04-05 00:56:36.165458 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.165467 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.165475 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.165482 | orchestrator |
2026-04-05 00:56:36.165488 | orchestrator | TASK [include_role : manila] ***************************************************
2026-04-05 00:56:36.165495 | orchestrator | Sunday 05 April 2026 00:53:37 +0000 (0:00:02.354) 0:03:28.040 **********
2026-04-05 00:56:36.165502 | orchestrator
| included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.165508 | orchestrator | 2026-04-05 00:56:36.165515 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-05 00:56:36.165522 | orchestrator | Sunday 05 April 2026 00:53:38 +0000 (0:00:01.313) 0:03:29.354 ********** 2026-04-05 00:56:36.165529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.165537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': 
'30'}}})  2026-04-05 00:56:36.165557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 
'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.165585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.165607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165647 | orchestrator | 2026-04-05 00:56:36.165654 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-05 00:56:36.165660 | orchestrator | Sunday 05 April 2026 00:53:44 +0000 (0:00:05.661) 0:03:35.016 ********** 2026-04-05 00:56:36.165667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.165681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165706 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.165713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.165720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165749 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.165759 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.165766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.165791 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.165798 | orchestrator | 2026-04-05 00:56:36.165805 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-05 00:56:36.165812 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:00.946) 0:03:35.963 ********** 2026-04-05 00:56:36.165821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.165836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.165855 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
00:56:36.165867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.165879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.165890 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.165901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.165912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.165924 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.165934 | orchestrator | 2026-04-05 00:56:36.165950 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-05 00:56:36.165961 | orchestrator | Sunday 05 April 2026 00:53:46 +0000 (0:00:01.416) 0:03:37.379 ********** 2026-04-05 00:56:36.165972 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.165982 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.165993 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.166005 | orchestrator | 2026-04-05 00:56:36.166059 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-05 00:56:36.166074 | orchestrator | Sunday 05 April 2026 00:53:48 +0000 (0:00:01.232) 0:03:38.612 
********** 2026-04-05 00:56:36.166086 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.166098 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.166110 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.166122 | orchestrator | 2026-04-05 00:56:36.166134 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-05 00:56:36.166141 | orchestrator | Sunday 05 April 2026 00:53:50 +0000 (0:00:02.163) 0:03:40.776 ********** 2026-04-05 00:56:36.166155 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.166162 | orchestrator | 2026-04-05 00:56:36.166169 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-05 00:56:36.166175 | orchestrator | Sunday 05 April 2026 00:53:51 +0000 (0:00:01.454) 0:03:42.231 ********** 2026-04-05 00:56:36.166226 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 00:56:36.166235 | orchestrator | 2026-04-05 00:56:36.166242 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-05 00:56:36.166249 | orchestrator | Sunday 05 April 2026 00:53:53 +0000 (0:00:01.481) 0:03:43.712 ********** 2026-04-05 00:56:36.166262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:36.166271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:36.166278 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.166293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:36.166306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:36.166314 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.166324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:36.166336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:36.166350 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.166362 | orchestrator | 2026-04-05 00:56:36.166374 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-05 00:56:36.166393 | orchestrator | Sunday 05 April 2026 00:53:55 +0000 (0:00:02.293) 0:03:46.005 ********** 2026-04-05 00:56:36.166406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:36.166425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:36.166437 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:56:36.166456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:36.166480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:36.166492 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.166510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:36.166524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:36.166542 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.166549 | orchestrator | 2026-04-05 00:56:36.166560 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-05 00:56:36.166567 | orchestrator | Sunday 05 April 2026 00:53:57 +0000 (0:00:02.283) 0:03:48.289 ********** 2026-04-05 00:56:36.166575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:36.166584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:36.166596 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.166608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:36.166620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:36.166632 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.166649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:36.166661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:36.166678 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.166688 | orchestrator | 2026-04-05 00:56:36.166699 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-05 00:56:36.166711 | orchestrator | Sunday 05 April 2026 00:54:00 +0000 (0:00:02.349) 0:03:50.638 ********** 2026-04-05 00:56:36.166722 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.166733 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.166739 | orchestrator | changed: [testbed-node-2] 2026-04-05 
00:56:36.166745 | orchestrator | 2026-04-05 00:56:36.166780 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-05 00:56:36.166787 | orchestrator | Sunday 05 April 2026 00:54:01 +0000 (0:00:01.807) 0:03:52.446 ********** 2026-04-05 00:56:36.166793 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.166799 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.166805 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.166811 | orchestrator | 2026-04-05 00:56:36.166818 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-05 00:56:36.166824 | orchestrator | Sunday 05 April 2026 00:54:03 +0000 (0:00:01.361) 0:03:53.808 ********** 2026-04-05 00:56:36.166830 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.166836 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.166842 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.166848 | orchestrator | 2026-04-05 00:56:36.166855 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-05 00:56:36.166861 | orchestrator | Sunday 05 April 2026 00:54:03 +0000 (0:00:00.312) 0:03:54.120 ********** 2026-04-05 00:56:36.166867 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.166873 | orchestrator | 2026-04-05 00:56:36.166879 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-05 00:56:36.166886 | orchestrator | Sunday 05 April 2026 00:54:04 +0000 (0:00:01.066) 0:03:55.187 ********** 2026-04-05 00:56:36.166893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:56:36.166899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:56:36.166910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:56:36.166947 | orchestrator | 2026-04-05 00:56:36.166953 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-05 00:56:36.166960 | orchestrator | Sunday 05 April 2026 00:54:06 +0000 (0:00:01.881) 0:03:57.068 ********** 2026-04-05 00:56:36.166970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:56:36.166977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 
00:56:36.166983 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.166990 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.166996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:56:36.167003 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.167009 | orchestrator | 2026-04-05 00:56:36.167015 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-05 00:56:36.167021 | orchestrator | Sunday 05 April 2026 00:54:06 +0000 (0:00:00.382) 0:03:57.451 ********** 2026-04-05 00:56:36.167028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 00:56:36.167035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 00:56:36.167046 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:56:36.167052 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.167058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 00:56:36.167065 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.167071 | orchestrator | 2026-04-05 00:56:36.167077 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-05 00:56:36.167087 | orchestrator | Sunday 05 April 2026 00:54:07 +0000 (0:00:00.831) 0:03:58.283 ********** 2026-04-05 00:56:36.167097 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.167113 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.167125 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.167146 | orchestrator | 2026-04-05 00:56:36.167156 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-05 00:56:36.167166 | orchestrator | Sunday 05 April 2026 00:54:08 +0000 (0:00:00.861) 0:03:59.144 ********** 2026-04-05 00:56:36.167177 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.167204 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.167216 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.167226 | orchestrator | 2026-04-05 00:56:36.167236 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-05 00:56:36.167247 | orchestrator | Sunday 05 April 2026 00:54:09 +0000 (0:00:01.361) 0:04:00.506 ********** 2026-04-05 00:56:36.167258 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.167268 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.167278 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
00:56:36.167289 | orchestrator | 2026-04-05 00:56:36.167296 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-05 00:56:36.167302 | orchestrator | Sunday 05 April 2026 00:54:10 +0000 (0:00:00.319) 0:04:00.825 ********** 2026-04-05 00:56:36.167308 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.167314 | orchestrator | 2026-04-05 00:56:36.167320 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-05 00:56:36.167327 | orchestrator | Sunday 05 April 2026 00:54:11 +0000 (0:00:01.278) 0:04:02.103 ********** 2026-04-05 00:56:36.167341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.167349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:56:36.167374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:56:36.167386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167401 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:56:36.167419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-05 00:56:36.167428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:56:36.167446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:56:36.167473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.167483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.167490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:36.167497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:56:36.167527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:56:36.167534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:56:36.167565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 00:56:36.167582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:56:36.167599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:36.167618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:56:36.167836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:56:36.167878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:56:36.167886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:36.167896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:36.167908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 00:56:36.167921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.167930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.167937 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-05 00:56:36.167947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-05 00:56:36.167959 | orchestrator |
2026-04-05 00:56:36.167965 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-04-05 00:56:36.167972 | orchestrator | Sunday 05 April 2026 00:54:16 +0000 (0:00:04.569) 0:04:06.672 **********
2026-04-05 00:56:36.167978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:56:36.167985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.167995 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 00:56:36.168006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  
2026-04-05 00:56:36.168017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.168024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.168030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 00:56:36.168040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.168047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 00:56:36.168057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.168067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:36.168074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-05 00:56:36.168086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'},
'pid_mode': ''}})
2026-04-05 00:56:36.168107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-05 00:56:36.168114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-05 00:56:36.168127 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-05 00:56:36.168134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-05 00:56:36.168153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image':
'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-05 00:56:36.168181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 00:56:36.168202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-05 00:56:36.168208 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.168215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 00:56:36.168225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image':
'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 00:56:36.168257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-05 00:56:36.168272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-05 00:56:36.168284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent',
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-05 00:56:36.168641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-05 00:56:36.168665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328',
'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-05 00:56:36.168678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-05 00:56:36.168690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168703 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.168711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-05 00:56:36.168725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-05 00:56:36.168737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
'NONE', 'timeout': '30'}}})
2026-04-05 00:56:36.168745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 00:56:36.168753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-05 00:56:36.168772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-05 00:56:36.168795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-05 00:56:36.168807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-05 00:56:36.168822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-05 00:56:36.168830 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.168837 | orchestrator |
2026-04-05 00:56:36.168845 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-04-05 00:56:36.168858 | orchestrator | Sunday 05 April 2026 00:54:17 +0000 (0:00:01.742) 0:04:08.415 **********
2026-04-05 00:56:36.168866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.168874 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.168881 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:36.168889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.168896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.168902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.168913 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:36.168923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-05 00:56:36.168929 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:36.168936 | orchestrator |
2026-04-05 00:56:36.168942 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-04-05 00:56:36.168948 | orchestrator | Sunday 05 April 2026 00:54:19 +0000 (0:00:01.566) 0:04:09.982 **********
2026-04-05 00:56:36.168968 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.168974 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.168981 |
orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.168987 | orchestrator |
2026-04-05 00:56:36.168993 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-04-05 00:56:36.168999 | orchestrator | Sunday 05 April 2026 00:54:21 +0000 (0:00:01.637) 0:04:11.619 **********
2026-04-05 00:56:36.169005 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:36.169011 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:36.169017 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:36.169024 | orchestrator |
2026-04-05 00:56:36.169030 | orchestrator | TASK [include_role : placement] ************************************************
2026-04-05 00:56:36.169036 | orchestrator | Sunday 05 April 2026 00:54:23 +0000 (0:00:02.163) 0:04:13.782 **********
2026-04-05 00:56:36.169042 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:36.169048 | orchestrator |
2026-04-05 00:56:36.169067 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-04-05 00:56:36.169077 | orchestrator | Sunday 05 April 2026 00:54:24 +0000 (0:00:01.171) 0:04:14.953 **********
2026-04-05 00:56:36.169085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780',
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 00:56:36.169099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 00:56:36.169131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 00:56:36.169145 | orchestrator |
2026-04-05 00:56:36.169155 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-04-05 00:56:36.169164 | orchestrator | Sunday 05 April 2026 00:54:27 +0000 (0:00:03.622) 0:04:18.576 **********
2026-04-05 00:56:36.169176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-05 00:56:36.169223 | orchestrator | skipping: [testbed-node-0]
2026-04-05
00:56:36.169231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:56:36.169238 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.169245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk 
GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:56:36.169257 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.169264 | orchestrator | 2026-04-05 00:56:36.169270 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-05 00:56:36.169276 | orchestrator | Sunday 05 April 2026 00:54:29 +0000 (0:00:01.263) 0:04:19.840 ********** 2026-04-05 00:56:36.169287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.169294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.169300 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.169307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.169314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.169320 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.169331 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.169337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.169344 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.169350 | orchestrator | 2026-04-05 00:56:36.169356 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-05 00:56:36.169363 | orchestrator | Sunday 05 April 2026 00:54:30 +0000 (0:00:00.803) 0:04:20.643 ********** 2026-04-05 00:56:36.169369 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.169375 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.169381 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.169387 | orchestrator | 2026-04-05 00:56:36.169394 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-05 00:56:36.169400 | orchestrator | Sunday 05 April 2026 00:54:31 +0000 (0:00:01.243) 0:04:21.887 ********** 2026-04-05 00:56:36.169406 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.169412 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.169419 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.169429 | orchestrator | 2026-04-05 00:56:36.169436 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-05 00:56:36.169442 | orchestrator | Sunday 05 April 2026 00:54:33 +0000 (0:00:02.366) 0:04:24.254 ********** 2026-04-05 00:56:36.169448 | orchestrator | included: nova for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-05 00:56:36.169454 | orchestrator | 2026-04-05 00:56:36.169461 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-05 00:56:36.169467 | orchestrator | Sunday 05 April 2026 00:54:35 +0000 (0:00:01.568) 0:04:25.823 ********** 2026-04-05 00:56:36.169474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.169484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.169495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.169502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.169514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.169540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.169565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169581 | orchestrator | 2026-04-05 00:56:36.169587 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-05 00:56:36.169593 | orchestrator | Sunday 05 April 2026 00:54:40 +0000 (0:00:05.744) 0:04:31.567 ********** 2026-04-05 00:56:36.169731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.169744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.169751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169762 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.169771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.169781 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.169790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169802 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.169810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.169819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.169830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.169842 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.169848 | orchestrator | 2026-04-05 00:56:36.169853 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-05 00:56:36.169859 | orchestrator | Sunday 05 April 2026 00:54:42 +0000 (0:00:01.245) 0:04:32.813 ********** 2026-04-05 00:56:36.169864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169887 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.169893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169923 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.169928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.169948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}})  2026-04-05 00:56:36.169954 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.169959 | orchestrator | 2026-04-05 00:56:36.169964 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-05 00:56:36.169970 | orchestrator | Sunday 05 April 2026 00:54:44 +0000 (0:00:01.810) 0:04:34.624 ********** 2026-04-05 00:56:36.169980 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.169994 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.170004 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.170013 | orchestrator | 2026-04-05 00:56:36.170048 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-05 00:56:36.170059 | orchestrator | Sunday 05 April 2026 00:54:45 +0000 (0:00:01.289) 0:04:35.913 ********** 2026-04-05 00:56:36.170068 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.170078 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.170088 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.170100 | orchestrator | 2026-04-05 00:56:36.170112 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-05 00:56:36.170121 | orchestrator | Sunday 05 April 2026 00:54:47 +0000 (0:00:02.021) 0:04:37.935 ********** 2026-04-05 00:56:36.170131 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.170141 | orchestrator | 2026-04-05 00:56:36.170150 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-05 00:56:36.170158 | orchestrator | Sunday 05 April 2026 00:54:48 +0000 (0:00:01.168) 0:04:39.103 ********** 2026-04-05 00:56:36.170163 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-05 00:56:36.170169 | orchestrator | 
2026-04-05 00:56:36.170175 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-05 00:56:36.170180 | orchestrator | Sunday 05 April 2026 00:54:49 +0000 (0:00:00.987) 0:04:40.091 ********** 2026-04-05 00:56:36.170203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:56:36.170261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:56:36.170303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:56:36.170312 | orchestrator | 2026-04-05 00:56:36.170318 | orchestrator | TASK [haproxy-config : Add configuration for 
nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-05 00:56:36.170324 | orchestrator | Sunday 05 April 2026 00:54:53 +0000 (0:00:04.022) 0:04:44.113 ********** 2026-04-05 00:56:36.170335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170341 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.170347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170353 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.170359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170365 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 00:56:36.170372 | orchestrator | 2026-04-05 00:56:36.170379 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-05 00:56:36.170386 | orchestrator | Sunday 05 April 2026 00:54:54 +0000 (0:00:01.400) 0:04:45.514 ********** 2026-04-05 00:56:36.170393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:36.170401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:36.170407 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.170414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:36.170425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:36.170432 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.170439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:36.170449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:36.170456 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.170462 | orchestrator | 2026-04-05 00:56:36.170469 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 00:56:36.170475 | orchestrator | Sunday 05 April 2026 00:54:56 +0000 (0:00:01.979) 0:04:47.493 ********** 2026-04-05 00:56:36.170481 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.170488 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.170494 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.170500 | orchestrator | 2026-04-05 00:56:36.170507 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:56:36.170514 | orchestrator | Sunday 05 April 2026 00:54:59 +0000 (0:00:02.767) 0:04:50.261 ********** 2026-04-05 00:56:36.170520 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.170527 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.170533 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.170540 | orchestrator | 2026-04-05 00:56:36.170546 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-05 00:56:36.170553 | orchestrator | Sunday 05 April 2026 00:55:03 +0000 (0:00:03.473) 0:04:53.735 ********** 2026-04-05 00:56:36.170560 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-05 00:56:36.170566 | orchestrator | 2026-04-05 00:56:36.170573 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-05 00:56:36.170580 | orchestrator | Sunday 05 April 2026 00:55:03 +0000 (0:00:00.860) 0:04:54.595 ********** 2026-04-05 
00:56:36.170590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170785 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.170794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170799 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.170805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170816 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.170821 | orchestrator | 2026-04-05 00:56:36.170827 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-05 00:56:36.170832 | orchestrator | Sunday 05 April 2026 00:55:05 +0000 (0:00:01.492) 0:04:56.088 ********** 2026-04-05 00:56:36.170838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170844 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.170853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170859 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.170864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:36.170870 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.170875 | orchestrator | 2026-04-05 00:56:36.170881 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-05 00:56:36.170886 | orchestrator | Sunday 05 April 2026 00:55:07 +0000 (0:00:01.694) 0:04:57.783 ********** 2026-04-05 00:56:36.170891 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.170897 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.170902 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.170907 | orchestrator | 2026-04-05 00:56:36.170913 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 00:56:36.170918 | orchestrator | Sunday 05 April 2026 00:55:08 +0000 (0:00:01.420) 0:04:59.204 ********** 2026-04-05 00:56:36.170924 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.170932 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.170938 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:36.170943 | orchestrator | 2026-04-05 00:56:36.170949 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:56:36.170954 | orchestrator | Sunday 05 April 2026 00:55:11 +0000 (0:00:02.410) 0:05:01.614 ********** 2026-04-05 00:56:36.170960 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.170965 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:36.170971 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.170976 | orchestrator | 2026-04-05 00:56:36.170981 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-05 00:56:36.170987 | orchestrator | Sunday 05 April 2026 00:55:13 +0000 (0:00:02.984) 0:05:04.598 ********** 2026-04-05 00:56:36.170997 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-05 00:56:36.171003 | orchestrator | 2026-04-05 00:56:36.171008 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-05 00:56:36.171014 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:01.479) 0:05:06.078 ********** 2026-04-05 00:56:36.171019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:36.171025 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.171031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:36.171037 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.171042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:36.171048 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.171053 | orchestrator | 2026-04-05 00:56:36.171059 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-05 00:56:36.171092 | orchestrator | Sunday 05 April 2026 00:55:16 +0000 (0:00:01.435) 0:05:07.513 ********** 2026-04-05 00:56:36.171102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:36.171108 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.171145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:36.171153 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.171206 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:36.171220 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.171226 | orchestrator | 2026-04-05 00:56:36.171231 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-05 00:56:36.171236 | orchestrator | Sunday 05 April 2026 00:55:18 +0000 (0:00:01.372) 0:05:08.886 ********** 2026-04-05 00:56:36.171242 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.171247 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.171253 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.171258 | orchestrator | 2026-04-05 00:56:36.171263 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 00:56:36.171269 | orchestrator | Sunday 05 April 2026 00:55:20 +0000 (0:00:01.858) 0:05:10.745 ********** 2026-04-05 00:56:36.171278 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.171339 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:36.171348 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.171355 | orchestrator | 2026-04-05 00:56:36.171364 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:56:36.171593 | orchestrator | Sunday 05 April 2026 00:55:22 +0000 (0:00:02.456) 0:05:13.202 ********** 2026-04-05 00:56:36.171600 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.171605 | orchestrator | ok: [testbed-node-1] 2026-04-05 
00:56:36.171611 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.171616 | orchestrator | 2026-04-05 00:56:36.171621 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-05 00:56:36.171627 | orchestrator | Sunday 05 April 2026 00:55:25 +0000 (0:00:03.061) 0:05:16.264 ********** 2026-04-05 00:56:36.171632 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.171638 | orchestrator | 2026-04-05 00:56:36.171643 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-05 00:56:36.171697 | orchestrator | Sunday 05 April 2026 00:55:26 +0000 (0:00:01.240) 0:05:17.504 ********** 2026-04-05 00:56:36.171704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:36.171715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:36.171721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.171739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.171746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.171752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:36.171758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:36.171766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:36.171776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:36.171785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.171791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.171797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.171802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.171810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.171822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.171828 | orchestrator | 2026-04-05 00:56:36.171865 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-05 00:56:36.171872 | orchestrator | Sunday 05 April 2026 00:55:30 +0000 (0:00:03.689) 0:05:21.194 ********** 2026-04-05 00:56:36.171882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:56:36.171888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:36.171894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.171900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.171905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.171915 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.171924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:56:36.171966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:36.172116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.172201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.172226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.172243 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.172331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:56:36.172343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:36.172376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.172388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:36.172397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:36.172407 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.172416 | orchestrator | 2026-04-05 00:56:36.172425 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-05 00:56:36.172435 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:00.787) 0:05:21.982 ********** 2026-04-05 00:56:36.172452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:36.172462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:36.172472 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.172482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:36.172495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:36.172505 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.172514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:36.172524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:36.172582 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.172594 | orchestrator | 2026-04-05 00:56:36.172603 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-05 00:56:36.172612 | orchestrator | Sunday 05 April 2026 00:55:32 +0000 (0:00:00.948) 0:05:22.930 ********** 2026-04-05 00:56:36.172621 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.172631 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.172640 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.172650 | orchestrator | 2026-04-05 00:56:36.172659 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-05 00:56:36.172669 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:01.231) 0:05:24.162 ********** 2026-04-05 00:56:36.172679 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.172688 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.173318 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.173335 | 
orchestrator | 2026-04-05 00:56:36.173374 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-05 00:56:36.173385 | orchestrator | Sunday 05 April 2026 00:55:35 +0000 (0:00:02.054) 0:05:26.217 ********** 2026-04-05 00:56:36.173394 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.173403 | orchestrator | 2026-04-05 00:56:36.173412 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-05 00:56:36.173422 | orchestrator | Sunday 05 April 2026 00:55:37 +0000 (0:00:01.476) 0:05:27.693 ********** 2026-04-05 00:56:36.173432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.173451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.173466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.173501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:56:36.173513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:56:36.173529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:56:36.173539 | orchestrator | 2026-04-05 00:56:36.173549 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-05 00:56:36.173559 | orchestrator | Sunday 05 April 2026 00:55:42 +0000 (0:00:05.202) 0:05:32.895 ********** 2026-04-05 00:56:36.173572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.173608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.173621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:56:36.173636 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.173645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:56:36.173654 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.173667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.173701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': 
['option httpchk GET /api/status']}}}})  2026-04-05 00:56:36.173719 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.173729 | orchestrator | 2026-04-05 00:56:36.173739 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-05 00:56:36.173749 | orchestrator | Sunday 05 April 2026 00:55:43 +0000 (0:00:01.147) 0:05:34.043 ********** 2026-04-05 00:56:36.173758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.173768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 00:56:36.173776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 00:56:36.173785 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.173794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.173802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}})  2026-04-05 00:56:36.173811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 00:56:36.173820 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.173832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.173840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 00:56:36.173848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 00:56:36.173857 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.173865 | orchestrator | 2026-04-05 00:56:36.173873 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-05 00:56:36.173881 | orchestrator | Sunday 05 April 2026 00:55:44 +0000 (0:00:01.324) 0:05:35.368 ********** 2026-04-05 00:56:36.173889 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.173898 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.173907 | orchestrator | skipping: [testbed-node-2] 
2026-04-05 00:56:36.173915 | orchestrator | 2026-04-05 00:56:36.173923 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-05 00:56:36.173953 | orchestrator | Sunday 05 April 2026 00:55:45 +0000 (0:00:00.496) 0:05:35.864 ********** 2026-04-05 00:56:36.173967 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.173976 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.173985 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.173993 | orchestrator | 2026-04-05 00:56:36.174002 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-05 00:56:36.174011 | orchestrator | Sunday 05 April 2026 00:55:46 +0000 (0:00:01.403) 0:05:37.268 ********** 2026-04-05 00:56:36.174041 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.174050 | orchestrator | 2026-04-05 00:56:36.174058 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-05 00:56:36.174067 | orchestrator | Sunday 05 April 2026 00:55:48 +0000 (0:00:01.779) 0:05:39.048 ********** 2026-04-05 00:56:36.174077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 00:56:36.174087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:56:36.174097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-05 00:56:36.174143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 00:56:36.174160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:56:36.174179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 00:56:36.174267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:56:36.174273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.174311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.174323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 
'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:56:36.174329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:56:36.174334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:56:36.174392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:56:36.174401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174429 | orchestrator | 2026-04-05 00:56:36.174434 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-05 00:56:36.174439 | orchestrator | Sunday 05 April 2026 00:55:52 +0000 (0:00:04.348) 0:05:43.396 ********** 2026-04-05 00:56:36.174444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 00:56:36.174450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:56:36.174455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.174496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 00:56:36.174504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:56:36.174512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:56:36.174530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174546 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.174573 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.174580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:56:36.174586 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174601 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.174609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 00:56:36.174617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:56:36.174625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174631 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:56:36.174651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 00:56:36.174657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:36.174670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:56:36.174675 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.174680 | orchestrator | 2026-04-05 00:56:36.174684 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-05 00:56:36.174689 | orchestrator | Sunday 05 April 2026 00:55:53 +0000 (0:00:01.017) 0:05:44.414 ********** 2026-04-05 00:56:36.174695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:56:36.174700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:56:36.174711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.174716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.174722 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.174727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:56:36.174734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:56:36.174739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.174745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.174750 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.174757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:56:36.174762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 00:56:36.174767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.174772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 00:56:36.174777 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.174785 | orchestrator | 2026-04-05 00:56:36.174790 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-05 00:56:36.174795 | orchestrator | Sunday 05 April 2026 00:55:55 +0000 (0:00:01.395) 0:05:45.810 ********** 2026-04-05 00:56:36.174800 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.174805 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.174809 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.174814 | orchestrator | 2026-04-05 00:56:36.174819 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-05 00:56:36.174824 | orchestrator | Sunday 05 April 2026 00:55:55 +0000 (0:00:00.502) 0:05:46.312 ********** 2026-04-05 00:56:36.174829 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.174833 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.174838 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.174843 | orchestrator | 2026-04-05 00:56:36.174848 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-05 00:56:36.174853 | orchestrator | Sunday 05 April 2026 00:55:57 +0000 (0:00:01.378) 0:05:47.691 ********** 2026-04-05 00:56:36.174857 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.174862 | orchestrator | 2026-04-05 00:56:36.174867 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-05 00:56:36.174872 | orchestrator | Sunday 05 April 2026 00:55:58 +0000 (0:00:01.480) 0:05:49.171 ********** 2026-04-05 00:56:36.174879 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:56:36.174887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 
2026-04-05 00:56:36.174893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:56:36.174901 | orchestrator | 2026-04-05 00:56:36.174906 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-05 00:56:36.174911 | orchestrator | Sunday 05 April 2026 00:56:01 +0000 (0:00:03.129) 0:05:52.301 ********** 2026-04-05 00:56:36.174916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:56:36.174924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:56:36.174929 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.174934 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.174942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:56:36.174947 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.174952 | orchestrator | 2026-04-05 00:56:36.174957 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-05 00:56:36.174964 | orchestrator | Sunday 05 April 2026 00:56:02 +0000 (0:00:00.464) 0:05:52.765 ********** 2026-04-05 00:56:36.174969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:56:36.174974 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.174979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:56:36.174984 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.174989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:56:36.174994 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.174999 | orchestrator | 2026-04-05 00:56:36.175004 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-05 00:56:36.175008 | orchestrator | Sunday 05 April 2026 00:56:02 +0000 
(0:00:00.705) 0:05:53.471 ********** 2026-04-05 00:56:36.175013 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175018 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175023 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175028 | orchestrator | 2026-04-05 00:56:36.175032 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-05 00:56:36.175037 | orchestrator | Sunday 05 April 2026 00:56:03 +0000 (0:00:00.523) 0:05:53.994 ********** 2026-04-05 00:56:36.175042 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175047 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175051 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175056 | orchestrator | 2026-04-05 00:56:36.175061 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-05 00:56:36.175066 | orchestrator | Sunday 05 April 2026 00:56:04 +0000 (0:00:01.254) 0:05:55.249 ********** 2026-04-05 00:56:36.175071 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.175075 | orchestrator | 2026-04-05 00:56:36.175080 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-05 00:56:36.175085 | orchestrator | Sunday 05 April 2026 00:56:06 +0000 (0:00:01.624) 0:05:56.874 ********** 2026-04-05 00:56:36.175092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 00:56:36.175100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 00:56:36.175109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 00:56:36.175115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:56:36.175122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:56:36.175131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 00:56:36.175139 | orchestrator | 2026-04-05 00:56:36.175144 | 
orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-05 00:56:36.175149 | orchestrator | Sunday 05 April 2026 00:56:12 +0000 (0:00:06.416) 0:06:03.290 ********** 2026-04-05 00:56:36.175154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 00:56:36.175160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:56:36.175165 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 00:56:36.175197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:56:36.175207 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 00:56:36.175225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 00:56:36.175234 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175240 | orchestrator | 2026-04-05 00:56:36.175245 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-05 00:56:36.175249 | orchestrator | Sunday 05 April 2026 00:56:13 +0000 (0:00:01.200) 0:06:04.490 ********** 2026-04-05 00:56:36.175257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:56:36.175263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:56:36.175272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.175277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.175282 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:56:36.175295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:56:36.175300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.175305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.175310 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:56:36.175319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-05 00:56:36.175324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.175329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 00:56:36.175334 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175339 | orchestrator | 2026-04-05 00:56:36.175344 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-05 00:56:36.175349 | orchestrator | Sunday 05 April 2026 00:56:15 +0000 (0:00:01.464) 0:06:05.955 ********** 2026-04-05 00:56:36.175353 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:36.175358 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.175363 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.175368 | orchestrator | 2026-04-05 00:56:36.175372 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-05 00:56:36.175382 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:01.287) 0:06:07.242 ********** 2026-04-05 00:56:36.175387 | orchestrator | changed: [testbed-node-0] 2026-04-05 
00:56:36.175392 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:36.175396 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:36.175401 | orchestrator | 2026-04-05 00:56:36.175406 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-05 00:56:36.175411 | orchestrator | Sunday 05 April 2026 00:56:18 +0000 (0:00:02.252) 0:06:09.495 ********** 2026-04-05 00:56:36.175416 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175423 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175428 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175432 | orchestrator | 2026-04-05 00:56:36.175437 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-05 00:56:36.175442 | orchestrator | Sunday 05 April 2026 00:56:19 +0000 (0:00:00.351) 0:06:09.846 ********** 2026-04-05 00:56:36.175447 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175451 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175456 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175461 | orchestrator | 2026-04-05 00:56:36.175466 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-05 00:56:36.175471 | orchestrator | Sunday 05 April 2026 00:56:19 +0000 (0:00:00.675) 0:06:10.521 ********** 2026-04-05 00:56:36.175475 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175480 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175485 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175490 | orchestrator | 2026-04-05 00:56:36.175494 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-05 00:56:36.175499 | orchestrator | Sunday 05 April 2026 00:56:20 +0000 (0:00:00.353) 0:06:10.875 ********** 2026-04-05 00:56:36.175504 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
00:56:36.175509 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175513 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175518 | orchestrator | 2026-04-05 00:56:36.175523 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-05 00:56:36.175528 | orchestrator | Sunday 05 April 2026 00:56:20 +0000 (0:00:00.358) 0:06:11.234 ********** 2026-04-05 00:56:36.175533 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175537 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175544 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175549 | orchestrator | 2026-04-05 00:56:36.175554 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-04-05 00:56:36.175559 | orchestrator | Sunday 05 April 2026 00:56:21 +0000 (0:00:00.374) 0:06:11.608 ********** 2026-04-05 00:56:36.175564 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:36.175568 | orchestrator | 2026-04-05 00:56:36.175573 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-05 00:56:36.175578 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:01.899) 0:06:13.508 ********** 2026-04-05 00:56:36.175583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': 
'30'}}}) 2026-04-05 00:56:36.175589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.175597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:36.175605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.175610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.175618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:36.175623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-04-05 00:56:36.175628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.175637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:36.175642 | orchestrator | 2026-04-05 00:56:36.175647 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-05 00:56:36.175652 | orchestrator | Sunday 05 April 2026 00:56:25 +0000 (0:00:02.237) 0:06:15.745 ********** 2026-04-05 00:56:36.175657 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:56:36.175661 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:56:36.175666 | orchestrator | } 2026-04-05 00:56:36.175671 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:56:36.175676 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:56:36.175681 | orchestrator | } 2026-04-05 00:56:36.175686 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:56:36.175690 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:56:36.175695 | orchestrator | } 
2026-04-05 00:56:36.175700 | orchestrator | 2026-04-05 00:56:36.175705 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:56:36.175710 | orchestrator | Sunday 05 April 2026 00:56:25 +0000 (0:00:00.329) 0:06:16.075 ********** 2026-04-05 00:56:36.175717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.175722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.175730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.175735 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:36.175740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.175748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.175753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.175758 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:36.175763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:36.175771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:36.175776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:36.175781 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:36.175786 | orchestrator | 2026-04-05 00:56:36.175790 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-05 00:56:36.175797 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:01.492) 0:06:17.567 ********** 2026-04-05 00:56:36.175802 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.175807 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:36.175812 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.175820 | orchestrator | 2026-04-05 00:56:36.175825 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-05 00:56:36.175829 | orchestrator | Sunday 05 April 2026 00:56:27 +0000 (0:00:01.009) 0:06:18.576 ********** 2026-04-05 00:56:36.175834 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.175839 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:36.175844 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.175849 | orchestrator | 2026-04-05 00:56:36.175854 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-05 00:56:36.175858 | orchestrator | Sunday 05 April 2026 00:56:28 +0000 (0:00:00.328) 0:06:18.904 ********** 2026-04-05 00:56:36.175863 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.175868 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:36.175873 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.175877 | orchestrator | 2026-04-05 00:56:36.175882 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-05 00:56:36.175887 | 
orchestrator | Sunday 05 April 2026 00:56:29 +0000 (0:00:00.852) 0:06:19.756 ********** 2026-04-05 00:56:36.175892 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.175896 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:36.175901 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.175906 | orchestrator | 2026-04-05 00:56:36.175911 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-05 00:56:36.175916 | orchestrator | Sunday 05 April 2026 00:56:29 +0000 (0:00:00.845) 0:06:20.601 ********** 2026-04-05 00:56:36.175920 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:36.175925 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:36.175930 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:36.175934 | orchestrator | 2026-04-05 00:56:36.175939 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-05 00:56:36.175944 | orchestrator | Sunday 05 April 2026 00:56:31 +0000 (0:00:01.239) 0:06:21.841 ********** 2026-04-05 00:56:36.175952 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_rxe8egov/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_rxe8egov/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_rxe8egov/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_rxe8egov/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:56:36.175966 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ol4_hc1b/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ol4_hc1b/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ol4_hc1b/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ol4_hc1b/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:56:36.175977 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5kinr7qo/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5kinr7qo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_5kinr7qo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5kinr7qo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:56:36.175986 | orchestrator | 2026-04-05 00:56:36.175991 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:56:36.175996 | orchestrator | testbed-node-0 : ok=120  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-05 00:56:36.176001 | orchestrator | testbed-node-1 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-05 00:56:36.176006 | orchestrator | testbed-node-2 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-05 00:56:36.176011 | orchestrator | 2026-04-05 00:56:36.176016 | orchestrator | 2026-04-05 00:56:36.176020 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:56:36.176025 | orchestrator | Sunday 05 April 2026 00:56:33 +0000 (0:00:02.544) 0:06:24.385 ********** 2026-04-05 00:56:36.176030 | orchestrator | 
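[annotation] Every failed pull above targets `registry.osism.tech/kolla/release//haproxy:2.8.16.20260328`. Note the double slash: the reference contains an empty path component between `release` and `haproxy`, most likely from an unset namespace variable when the image name was templated. Docker's image-reference grammar requires every slash-separated component to be non-empty, so the registry API rejects it with 400 "invalid reference format". A minimal sketch of that check (the regex below is a simplified approximation of the distribution/reference grammar, not the exact one):

```python
import re

# Rough approximation of the image-reference grammar: slash-separated
# path components, each a non-empty lowercase alphanumeric run (with
# single '.', '_' or '-' separators inside), plus an optional :tag.
COMPONENT = r"[a-z0-9]+(?:[._-][a-z0-9]+)*"
REFERENCE = re.compile(rf"^{COMPONENT}(?:/{COMPONENT})*(?::[\w][\w.-]*)?$")

def is_valid_reference(ref: str) -> bool:
    """Return True if ref looks like a valid image reference."""
    return REFERENCE.match(ref) is not None

# The reference the failed pulls were built from -- note the empty
# component between "release" and "haproxy":
bad = "registry.osism.tech/kolla/release//haproxy:2.8.16.20260328"
# What the reference presumably should have looked like:
good = "registry.osism.tech/kolla/release/haproxy:2.8.16.20260328"
```

Under this sketch, `is_valid_reference(bad)` is False while `is_valid_reference(good)` is True, matching the daemon's rejection; fixing the configuration that renders the `release//` segment (rather than anything in kolla_container) is the likely remediation.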
=============================================================================== 2026-04-05 00:56:36.176035 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 6.74s 2026-04-05 00:56:36.176039 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.42s 2026-04-05 00:56:36.176044 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.75s 2026-04-05 00:56:36.176049 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 5.70s 2026-04-05 00:56:36.176054 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.67s 2026-04-05 00:56:36.176058 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.66s 2026-04-05 00:56:36.176063 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.64s 2026-04-05 00:56:36.176068 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.43s 2026-04-05 00:56:36.176073 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.22s 2026-04-05 00:56:36.176078 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.20s 2026-04-05 00:56:36.176082 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.02s 2026-04-05 00:56:36.176087 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.85s 2026-04-05 00:56:36.176092 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.65s 2026-04-05 00:56:36.176096 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.57s 2026-04-05 00:56:36.176101 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.40s 2026-04-05 00:56:36.176106 | orchestrator | haproxy-config 
: Copying over prometheus haproxy config ----------------- 4.35s 2026-04-05 00:56:36.176111 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.24s 2026-04-05 00:56:36.176118 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.23s 2026-04-05 00:56:36.176123 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.08s 2026-04-05 00:56:36.176128 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.02s 2026-04-05 00:56:36.176135 | orchestrator | 2026-04-05 00:56:36 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:56:36.176140 | orchestrator | 2026-04-05 00:56:36 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:56:36.176145 | orchestrator | 2026-04-05 00:56:36 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:56:36.176150 | orchestrator | 2026-04-05 00:56:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:56:39.208171 | orchestrator | 2026-04-05 00:56:39 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:56:39.209833 | orchestrator | 2026-04-05 00:56:39 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:56:39.210607 | orchestrator | 2026-04-05 00:56:39 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:56:39.210942 | orchestrator | 2026-04-05 00:56:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:56:42.265870 | orchestrator | 2026-04-05 00:56:42 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:56:42.265962 | orchestrator | 2026-04-05 00:56:42 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:56:42.266835 | orchestrator | 2026-04-05 00:56:42 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in 
state STARTED 2026-04-05 00:56:42.266878 | orchestrator | 2026-04-05 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:56:45.310767 | orchestrator | 2026-04-05 00:56:45 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:56:45.311668 | orchestrator | 2026-04-05 00:56:45 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:56:45.314009 | orchestrator | 2026-04-05 00:56:45 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:56:45.314104 | orchestrator | 2026-04-05 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:56:48.363112 | orchestrator | 2026-04-05 00:56:48 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:56:48.365061 | orchestrator | 2026-04-05 00:56:48 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:56:48.366142 | orchestrator | 2026-04-05 00:56:48 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:56:48.366353 | orchestrator | 2026-04-05 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:56:51.409591 | orchestrator | 2026-04-05 00:56:51 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:56:51.412375 | orchestrator | 2026-04-05 00:56:51 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:56:51.414790 | orchestrator | 2026-04-05 00:56:51 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:56:51.414893 | orchestrator | 2026-04-05 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:56:54.455407 | orchestrator | 2026-04-05 00:56:54 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:56:54.457162 | orchestrator | 2026-04-05 00:56:54 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:56:54.458450 | orchestrator 
| 2026-04-05 00:56:54 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:56:54.458497 | orchestrator | 2026-04-05 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:56:57.499892 | orchestrator | 2026-04-05 00:56:57 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:56:57.501775 | orchestrator | 2026-04-05 00:56:57 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:56:57.502930 | orchestrator | 2026-04-05 00:56:57 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:56:57.503060 | orchestrator | 2026-04-05 00:56:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:00.538628 | orchestrator | 2026-04-05 00:57:00 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:00.538965 | orchestrator | 2026-04-05 00:57:00 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:57:00.540544 | orchestrator | 2026-04-05 00:57:00 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:00.540579 | orchestrator | 2026-04-05 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:03.577493 | orchestrator | 2026-04-05 00:57:03 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:03.578626 | orchestrator | 2026-04-05 00:57:03 | INFO  | Task 430642b9-a27d-465c-b910-7094484443ae is in state STARTED 2026-04-05 00:57:03.579448 | orchestrator | 2026-04-05 00:57:03 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:03.579477 | orchestrator | 2026-04-05 00:57:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:06.625411 | orchestrator | 2026-04-05 00:57:06 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:06.628765 | orchestrator | 2026-04-05 00:57:06 | INFO  | Task 
430642b9-a27d-465c-b910-7094484443ae is in state SUCCESS 2026-04-05 00:57:06.630838 | orchestrator | 2026-04-05 00:57:06.630902 | orchestrator | 2026-04-05 00:57:06.630927 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:57:06.630949 | orchestrator | 2026-04-05 00:57:06.630969 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:57:06.630990 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:00.324) 0:00:00.324 ********** 2026-04-05 00:57:06.631012 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:57:06.631033 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:57:06.631053 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:57:06.631073 | orchestrator | 2026-04-05 00:57:06.631092 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:57:06.631110 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:00.293) 0:00:00.617 ********** 2026-04-05 00:57:06.631129 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-05 00:57:06.631149 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-05 00:57:06.631196 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-05 00:57:06.631216 | orchestrator | 2026-04-05 00:57:06.631235 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-05 00:57:06.631257 | orchestrator | 2026-04-05 00:57:06.631276 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 00:57:06.631295 | orchestrator | Sunday 05 April 2026 00:56:38 +0000 (0:00:00.313) 0:00:00.931 ********** 2026-04-05 00:57:06.631315 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:57:06.631335 | orchestrator | 2026-04-05 
00:57:06.631356 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-05 00:57:06.631403 | orchestrator | Sunday 05 April 2026 00:56:38 +0000 (0:00:00.567) 0:00:01.499 ********** 2026-04-05 00:57:06.631568 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:57:06.631599 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:57:06.631620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:57:06.631639 | orchestrator | 2026-04-05 00:57:06.631658 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-05 00:57:06.631677 | orchestrator | Sunday 05 April 2026 00:56:39 +0000 (0:00:01.024) 0:00:02.523 ********** 2026-04-05 00:57:06.631820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.631856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.631906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.631923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.631953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.631967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.631980 | orchestrator | 2026-04-05 00:57:06.632064 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 00:57:06.632085 | orchestrator | Sunday 05 April 2026 00:56:41 +0000 (0:00:01.309) 0:00:03.833 ********** 2026-04-05 00:57:06.632097 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:57:06.632108 | orchestrator | 2026-04-05 00:57:06.632134 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA 
certificates] ***** 2026-04-05 00:57:06.632146 | orchestrator | Sunday 05 April 2026 00:56:41 +0000 (0:00:00.620) 0:00:04.453 ********** 2026-04-05 00:57:06.632275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.632312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 
00:57:06.632325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.632344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.632367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.632388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.632408 | orchestrator | 2026-04-05 00:57:06.632426 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-05 00:57:06.632446 | orchestrator | Sunday 05 April 2026 00:56:44 +0000 (0:00:02.848) 0:00:07.301 ********** 2026-04-05 00:57:06.632466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.632503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.632528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:57:06.632562 | orchestrator 
| skipping: [testbed-node-0] 2026-04-05 00:57:06.632578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:57:06.632596 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:06.632620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.632648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:57:06.632676 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:06.632692 | orchestrator | 2026-04-05 00:57:06.632708 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-05 00:57:06.632724 | orchestrator | Sunday 05 April 2026 00:56:45 +0000 (0:00:01.014) 0:00:08.316 ********** 2026-04-05 00:57:06.632740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.632759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-04-05 00:57:06.632776 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:06.632800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.632829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:57:06.632857 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:06.632875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.632893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:57:06.632910 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:06.632927 | orchestrator | 2026-04-05 00:57:06.632943 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-05 00:57:06.632962 | orchestrator | Sunday 05 April 2026 00:56:46 +0000 (0:00:01.183) 0:00:09.499 ********** 2026-04-05 00:57:06.632995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.633025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.633043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.633061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.633092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.633122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.633141 | orchestrator | 2026-04-05 00:57:06.633190 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-05 00:57:06.633208 | orchestrator | Sunday 05 April 2026 00:56:49 +0000 (0:00:02.737) 0:00:12.237 ********** 2026-04-05 00:57:06.633224 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:57:06.633241 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:57:06.633257 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:57:06.633273 | orchestrator | 2026-04-05 00:57:06.633289 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-05 00:57:06.633306 | 
orchestrator | Sunday 05 April 2026 00:56:52 +0000 (0:00:02.966) 0:00:15.203 ********** 2026-04-05 00:57:06.633322 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:57:06.633339 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:57:06.633355 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:57:06.633371 | orchestrator | 2026-04-05 00:57:06.633387 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-05 00:57:06.633403 | orchestrator | Sunday 05 April 2026 00:56:54 +0000 (0:00:01.583) 0:00:16.787 ********** 2026-04-05 00:57:06.633419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.633452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.633494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 00:57:06.633515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.633535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.633572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 00:57:06.633585 | orchestrator | 2026-04-05 00:57:06.633596 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-05 00:57:06.633606 | orchestrator | Sunday 05 April 2026 00:56:56 +0000 (0:00:02.318) 0:00:19.106 ********** 2026-04-05 00:57:06.633615 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:57:06.633625 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:06.633635 | orchestrator | } 2026-04-05 00:57:06.633645 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:57:06.633654 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:06.633664 | orchestrator | } 2026-04-05 00:57:06.633673 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:57:06.633683 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:57:06.633692 | orchestrator | } 2026-04-05 00:57:06.633702 | orchestrator | 2026-04-05 00:57:06.633712 | 
orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:57:06.633721 | orchestrator | Sunday 05 April 2026 00:56:56 +0000 (0:00:00.598) 0:00:19.704 ********** 2026-04-05 00:57:06.633731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.633743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:57:06.633759 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:06.633773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.633791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:57:06.633802 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:06.633812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 00:57:06.633823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 00:57:06.633841 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:06.633859 | orchestrator | 2026-04-05 00:57:06.633875 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 00:57:06.633892 | orchestrator | Sunday 05 April 2026 00:56:57 +0000 (0:00:01.029) 0:00:20.734 ********** 2026-04-05 00:57:06.633908 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:06.633924 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:57:06.633941 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:57:06.633959 | orchestrator | 2026-04-05 00:57:06.633977 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 00:57:06.633995 | orchestrator | Sunday 05 April 2026 00:56:58 +0000 (0:00:00.344) 0:00:21.079 ********** 2026-04-05 00:57:06.634007 | orchestrator | 2026-04-05 00:57:06.634088 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 00:57:06.634113 | orchestrator | Sunday 05 April 2026 00:56:58 +0000 (0:00:00.071) 0:00:21.151 ********** 2026-04-05 
00:57:06.634130 | orchestrator | 2026-04-05 00:57:06.634145 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 00:57:06.634155 | orchestrator | Sunday 05 April 2026 00:56:58 +0000 (0:00:00.072) 0:00:21.224 ********** 2026-04-05 00:57:06.634195 | orchestrator | 2026-04-05 00:57:06.634207 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-05 00:57:06.634216 | orchestrator | Sunday 05 April 2026 00:56:58 +0000 (0:00:00.097) 0:00:21.321 ********** 2026-04-05 00:57:06.634226 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:06.634236 | orchestrator | 2026-04-05 00:57:06.634246 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-05 00:57:06.634264 | orchestrator | Sunday 05 April 2026 00:56:59 +0000 (0:00:00.668) 0:00:21.989 ********** 2026-04-05 00:57:06.634274 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:57:06.634284 | orchestrator | 2026-04-05 00:57:06.634293 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-05 00:57:06.634303 | orchestrator | Sunday 05 April 2026 00:56:59 +0000 (0:00:00.219) 0:00:22.208 ********** 2026-04-05 00:57:06.634314 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_acd_3rdy/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_acd_3rdy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_acd_3rdy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_acd_3rdy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:57:06.634348 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_j5ye3acp/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_j5ye3acp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_j5ye3acp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_j5ye3acp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:57:06.634361 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_qudqs3cv/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_qudqs3cv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_qudqs3cv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_qudqs3cv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:57:06.634378 | orchestrator | 2026-04-05 00:57:06.634392 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:57:06.634402 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-05 00:57:06.634413 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-05 00:57:06.634423 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-05 00:57:06.634433 | orchestrator | 2026-04-05 00:57:06.634442 | orchestrator | 2026-04-05 00:57:06.634457 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:57:06.634467 | orchestrator | Sunday 05 April 2026 00:57:03 +0000 (0:00:04.033) 0:00:26.242 ********** 2026-04-05 00:57:06.634476 | orchestrator | 
=============================================================================== 2026-04-05 00:57:06.634486 | orchestrator | opensearch : Restart opensearch container ------------------------------- 4.03s 2026-04-05 00:57:06.634495 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.97s 2026-04-05 00:57:06.634505 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.85s 2026-04-05 00:57:06.634514 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.74s 2026-04-05 00:57:06.634524 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.32s 2026-04-05 00:57:06.634533 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.58s 2026-04-05 00:57:06.634543 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.31s 2026-04-05 00:57:06.634552 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.18s 2026-04-05 00:57:06.634562 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.03s 2026-04-05 00:57:06.634577 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.02s 2026-04-05 00:57:06.634587 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.01s 2026-04-05 00:57:06.634597 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.67s 2026-04-05 00:57:06.634606 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-04-05 00:57:06.634616 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.60s 2026-04-05 00:57:06.634625 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-04-05 00:57:06.634635 | orchestrator 
| opensearch : include_tasks ---------------------------------------------- 0.34s 2026-04-05 00:57:06.634644 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s 2026-04-05 00:57:06.634654 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-04-05 00:57:06.634663 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.24s 2026-04-05 00:57:06.634673 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.22s 2026-04-05 00:57:06.634683 | orchestrator | 2026-04-05 00:57:06 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:06.634693 | orchestrator | 2026-04-05 00:57:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:09.671903 | orchestrator | 2026-04-05 00:57:09 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:09.673999 | orchestrator | 2026-04-05 00:57:09 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:09.674100 | orchestrator | 2026-04-05 00:57:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:12.709611 | orchestrator | 2026-04-05 00:57:12 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:12.710458 | orchestrator | 2026-04-05 00:57:12 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:12.710507 | orchestrator | 2026-04-05 00:57:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:15.766115 | orchestrator | 2026-04-05 00:57:15 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:15.766447 | orchestrator | 2026-04-05 00:57:15 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:15.766476 | orchestrator | 2026-04-05 00:57:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 
00:57:18.818133 | orchestrator | 2026-04-05 00:57:18 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:18.819053 | orchestrator | 2026-04-05 00:57:18 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:18.819385 | orchestrator | 2026-04-05 00:57:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:21.865421 | orchestrator | 2026-04-05 00:57:21 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:21.868267 | orchestrator | 2026-04-05 00:57:21 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:21.868358 | orchestrator | 2026-04-05 00:57:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:24.919964 | orchestrator | 2026-04-05 00:57:24 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:24.920065 | orchestrator | 2026-04-05 00:57:24 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:24.920080 | orchestrator | 2026-04-05 00:57:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:27.966376 | orchestrator | 2026-04-05 00:57:27 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:27.968224 | orchestrator | 2026-04-05 00:57:27 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:27.968265 | orchestrator | 2026-04-05 00:57:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:31.020322 | orchestrator | 2026-04-05 00:57:31 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:31.024790 | orchestrator | 2026-04-05 00:57:31 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:31.025616 | orchestrator | 2026-04-05 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:34.073518 | orchestrator | 2026-04-05 00:57:34 | INFO  | Task 
68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:34.074313 | orchestrator | 2026-04-05 00:57:34 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:34.074351 | orchestrator | 2026-04-05 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:37.126751 | orchestrator | 2026-04-05 00:57:37 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:37.129440 | orchestrator | 2026-04-05 00:57:37 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:37.129663 | orchestrator | 2026-04-05 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:40.179656 | orchestrator | 2026-04-05 00:57:40 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:40.179751 | orchestrator | 2026-04-05 00:57:40 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:40.179764 | orchestrator | 2026-04-05 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:43.221738 | orchestrator | 2026-04-05 00:57:43 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:43.224496 | orchestrator | 2026-04-05 00:57:43 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:43.224564 | orchestrator | 2026-04-05 00:57:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:46.280432 | orchestrator | 2026-04-05 00:57:46 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:46.283052 | orchestrator | 2026-04-05 00:57:46 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:46.283093 | orchestrator | 2026-04-05 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:49.333324 | orchestrator | 2026-04-05 00:57:49 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 
00:57:49.334594 | orchestrator | 2026-04-05 00:57:49 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:49.334643 | orchestrator | 2026-04-05 00:57:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:52.390538 | orchestrator | 2026-04-05 00:57:52 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:52.392113 | orchestrator | 2026-04-05 00:57:52 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:52.392183 | orchestrator | 2026-04-05 00:57:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:55.442327 | orchestrator | 2026-04-05 00:57:55 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:55.444532 | orchestrator | 2026-04-05 00:57:55 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:55.444670 | orchestrator | 2026-04-05 00:57:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:58.484070 | orchestrator | 2026-04-05 00:57:58 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state STARTED 2026-04-05 00:57:58.484226 | orchestrator | 2026-04-05 00:57:58 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:57:58.484244 | orchestrator | 2026-04-05 00:57:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:01.542377 | orchestrator | 2026-04-05 00:58:01 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:01.545929 | orchestrator | 2026-04-05 00:58:01 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:01.552492 | orchestrator | 2026-04-05 00:58:01 | INFO  | Task 68d82e1d-dc6f-4f5f-9be2-4e79a7304c46 is in state SUCCESS 2026-04-05 00:58:01.553708 | orchestrator | 2026-04-05 00:58:01.553827 | orchestrator | 2026-04-05 00:58:01.553933 | orchestrator | PLAY [Set kolla_action_mariadb] 
************************************************ 2026-04-05 00:58:01.553949 | orchestrator | 2026-04-05 00:58:01.553983 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-05 00:58:01.554470 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:00.111) 0:00:00.111 ********** 2026-04-05 00:58:01.554484 | orchestrator | ok: [localhost] => { 2026-04-05 00:58:01.554498 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-05 00:58:01.554510 | orchestrator | } 2026-04-05 00:58:01.554521 | orchestrator | 2026-04-05 00:58:01.554532 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-05 00:58:01.554543 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:00.043) 0:00:00.155 ********** 2026-04-05 00:58:01.554554 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-05 00:58:01.554567 | orchestrator | ...ignoring 2026-04-05 00:58:01.554578 | orchestrator | 2026-04-05 00:58:01.554589 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-05 00:58:01.554600 | orchestrator | Sunday 05 April 2026 00:56:40 +0000 (0:00:02.961) 0:00:03.116 ********** 2026-04-05 00:58:01.554611 | orchestrator | skipping: [localhost] 2026-04-05 00:58:01.554681 | orchestrator | 2026-04-05 00:58:01.554694 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-05 00:58:01.554705 | orchestrator | Sunday 05 April 2026 00:56:40 +0000 (0:00:00.058) 0:00:03.175 ********** 2026-04-05 00:58:01.554716 | orchestrator | ok: [localhost] 2026-04-05 00:58:01.554727 | orchestrator | 2026-04-05 00:58:01.554737 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
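The ignored failure above is expected: the "Check MariaDB service" task probes 192.168.16.9:3306 for the string `MariaDB` (Ansible's `wait_for` with a search string), and on a not-yet-deployed testbed the probe times out, so the play falls through to `kolla_action_mariadb = kolla_action_ng`. A minimal sketch of that kind of probe, assuming only that the server sends its handshake greeting first (host, port, and the 2-second window below mirror the log, but this is not the module's actual implementation):

```python
# Sketch of a MariaDB liveness probe: connect to the port and look for
# "MariaDB" in the initial handshake packet the server sends unprompted.
import socket

def mariadb_banner_present(host: str, port: int = 3306,
                           timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            greeting = sock.recv(1024)       # server speaks first in MySQL protocol
            return b"MariaDB" in greeting
    except OSError:                          # refused, unreachable, or timed out
        return False

# On the fresh testbed this returns False, matching the ignored
# "Timeout when waiting for search string MariaDB" result above.
```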
2026-04-05 00:58:01.554748 | orchestrator | 2026-04-05 00:58:01.554821 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:58:01.554833 | orchestrator | Sunday 05 April 2026 00:56:40 +0000 (0:00:00.201) 0:00:03.376 ********** 2026-04-05 00:58:01.554846 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:01.554857 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:01.554868 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:01.554879 | orchestrator | 2026-04-05 00:58:01.554890 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:58:01.554901 | orchestrator | Sunday 05 April 2026 00:56:40 +0000 (0:00:00.318) 0:00:03.695 ********** 2026-04-05 00:58:01.554912 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-05 00:58:01.554923 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-05 00:58:01.554934 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-05 00:58:01.554945 | orchestrator | 2026-04-05 00:58:01.554956 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-05 00:58:01.554967 | orchestrator | 2026-04-05 00:58:01.554979 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-05 00:58:01.555015 | orchestrator | Sunday 05 April 2026 00:56:41 +0000 (0:00:00.501) 0:00:04.196 ********** 2026-04-05 00:58:01.555026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 00:58:01.555038 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 00:58:01.555049 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 00:58:01.555060 | orchestrator | 2026-04-05 00:58:01.555071 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 00:58:01.555081 | orchestrator | Sunday 05 
April 2026 00:56:41 +0000 (0:00:00.402) 0:00:04.599 ********** 2026-04-05 00:58:01.555092 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:01.555104 | orchestrator | 2026-04-05 00:58:01.555162 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-05 00:58:01.555174 | orchestrator | Sunday 05 April 2026 00:56:42 +0000 (0:00:00.769) 0:00:05.369 ********** 2026-04-05 00:58:01.555258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.555280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.555309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.555354 | orchestrator | 2026-04-05 00:58:01.555368 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-05 00:58:01.555380 | orchestrator | Sunday 05 April 2026 00:56:46 +0000 (0:00:03.510) 0:00:08.879 ********** 2026-04-05 00:58:01.555391 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.555403 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.555414 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:01.555424 | orchestrator | 2026-04-05 00:58:01.555435 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-05 00:58:01.555446 | orchestrator | Sunday 05 April 2026 00:56:46 +0000 (0:00:00.634) 0:00:09.514 ********** 2026-04-05 00:58:01.555459 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.555472 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.555486 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:01.555497 | orchestrator | 2026-04-05 00:58:01.555510 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-05 00:58:01.555523 | orchestrator | Sunday 05 April 2026 00:56:48 +0000 (0:00:01.432) 0:00:10.947 ********** 2026-04-05 00:58:01.555538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.555576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.555593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.555614 | orchestrator | 2026-04-05 00:58:01.555628 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-05 00:58:01.555639 | orchestrator | Sunday 05 April 2026 00:56:52 +0000 (0:00:04.098) 0:00:15.045 ********** 2026-04-05 00:58:01.555650 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.555661 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.555672 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:01.555683 | orchestrator | 2026-04-05 00:58:01.555694 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-05 00:58:01.555705 | orchestrator | Sunday 05 April 2026 00:56:53 +0000 (0:00:01.119) 0:00:16.164 ********** 2026-04-05 00:58:01.555716 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:01.555727 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:01.555737 | orchestrator 
| changed: [testbed-node-2] 2026-04-05 00:58:01.555748 | orchestrator | 2026-04-05 00:58:01.555759 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 00:58:01.555770 | orchestrator | Sunday 05 April 2026 00:56:57 +0000 (0:00:03.928) 0:00:20.093 ********** 2026-04-05 00:58:01.555781 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:01.555792 | orchestrator | 2026-04-05 00:58:01.555802 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-05 00:58:01.555813 | orchestrator | Sunday 05 April 2026 00:56:57 +0000 (0:00:00.584) 0:00:20.678 ********** 2026-04-05 00:58:01.555842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.555862 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.555874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.555886 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.555910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.555931 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.555942 | orchestrator | 2026-04-05 00:58:01.555953 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-05 00:58:01.555964 | orchestrator | Sunday 05 April 2026 00:57:01 +0000 (0:00:03.249) 0:00:23.927 ********** 2026-04-05 00:58:01.555975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.555988 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.556011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.556040 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.556052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.556064 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.556075 | orchestrator | 2026-04-05 00:58:01.556086 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-05 00:58:01.556097 | orchestrator | Sunday 05 April 2026 00:57:04 +0000 (0:00:03.361) 0:00:27.289 ********** 2026-04-05 00:58:01.556187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.556212 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.556224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.556236 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.556253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.556273 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.556284 | orchestrator | 2026-04-05 00:58:01.556295 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-05 00:58:01.556312 | orchestrator | Sunday 05 April 2026 00:57:07 +0000 (0:00:02.863) 0:00:30.153 ********** 2026-04-05 00:58:01.556324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.556343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.556372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:58:01.556385 | orchestrator | 2026-04-05 00:58:01.556396 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-05 00:58:01.556407 | orchestrator | Sunday 05 April 2026 00:57:10 +0000 (0:00:02.830) 0:00:32.984 ********** 2026-04-05 00:58:01.556418 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:58:01.556429 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:58:01.556440 | orchestrator | } 2026-04-05 00:58:01.556451 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:58:01.556462 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:58:01.556473 | orchestrator | } 2026-04-05 00:58:01.556484 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:58:01.556494 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:58:01.556505 | orchestrator | } 2026-04-05 00:58:01.556516 | orchestrator | 2026-04-05 00:58:01.556527 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:58:01.556538 | orchestrator | Sunday 05 April 2026 00:57:10 +0000 (0:00:00.375) 0:00:33.359 ********** 2026-04-05 00:58:01.556555 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.556573 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.556590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.556601 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.556611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.556628 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.556638 | orchestrator | 2026-04-05 00:58:01.556652 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-05 00:58:01.556662 | 
orchestrator | Sunday 05 April 2026 00:57:13 +0000 (0:00:02.742) 0:00:36.101 ********** 2026-04-05 00:58:01.556672 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.556681 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.556691 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.556701 | orchestrator | 2026-04-05 00:58:01.556710 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-05 00:58:01.556720 | orchestrator | Sunday 05 April 2026 00:57:13 +0000 (0:00:00.533) 0:00:36.635 ********** 2026-04-05 00:58:01.556730 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.556739 | orchestrator | 2026-04-05 00:58:01.556754 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-05 00:58:01.556764 | orchestrator | Sunday 05 April 2026 00:57:13 +0000 (0:00:00.118) 0:00:36.753 ********** 2026-04-05 00:58:01.556773 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.556783 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.556793 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.556802 | orchestrator | 2026-04-05 00:58:01.556812 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-05 00:58:01.556822 | orchestrator | Sunday 05 April 2026 00:57:14 +0000 (0:00:00.306) 0:00:37.060 ********** 2026-04-05 00:58:01.556831 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.556841 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.556850 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.556860 | orchestrator | 2026-04-05 00:58:01.556870 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-05 00:58:01.556879 | orchestrator | Sunday 05 April 2026 00:57:14 +0000 (0:00:00.326) 0:00:37.387 ********** 2026-04-05 00:58:01.556889 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:58:01.556899 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.556908 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.556918 | orchestrator | 2026-04-05 00:58:01.556927 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-05 00:58:01.556937 | orchestrator | Sunday 05 April 2026 00:57:14 +0000 (0:00:00.317) 0:00:37.704 ********** 2026-04-05 00:58:01.556947 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.556956 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.556966 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.556975 | orchestrator | 2026-04-05 00:58:01.556985 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-05 00:58:01.556994 | orchestrator | Sunday 05 April 2026 00:57:15 +0000 (0:00:00.590) 0:00:38.295 ********** 2026-04-05 00:58:01.557004 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557014 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557023 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557033 | orchestrator | 2026-04-05 00:58:01.557043 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-05 00:58:01.557052 | orchestrator | Sunday 05 April 2026 00:57:15 +0000 (0:00:00.340) 0:00:38.636 ********** 2026-04-05 00:58:01.557062 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557072 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557081 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557091 | orchestrator | 2026-04-05 00:58:01.557100 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-05 00:58:01.557139 | orchestrator | Sunday 05 April 2026 00:57:16 +0000 (0:00:00.323) 0:00:38.960 ********** 2026-04-05 00:58:01.557150 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2026-04-05 00:58:01.557160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 00:58:01.557170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 00:58:01.557179 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557189 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 00:58:01.557199 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 00:58:01.557208 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 00:58:01.557218 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557228 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 00:58:01.557237 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 00:58:01.557247 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 00:58:01.557257 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557266 | orchestrator | 2026-04-05 00:58:01.557276 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-05 00:58:01.557286 | orchestrator | Sunday 05 April 2026 00:57:16 +0000 (0:00:00.340) 0:00:39.300 ********** 2026-04-05 00:58:01.557295 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557305 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557315 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557324 | orchestrator | 2026-04-05 00:58:01.557334 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-05 00:58:01.557344 | orchestrator | Sunday 05 April 2026 00:57:16 +0000 (0:00:00.513) 0:00:39.813 ********** 2026-04-05 00:58:01.557353 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557363 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557378 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557394 | orchestrator | 2026-04-05 00:58:01.557411 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-05 00:58:01.557426 | orchestrator | Sunday 05 April 2026 00:57:17 +0000 (0:00:00.317) 0:00:40.131 ********** 2026-04-05 00:58:01.557443 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557458 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557474 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557488 | orchestrator | 2026-04-05 00:58:01.557502 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-05 00:58:01.557518 | orchestrator | Sunday 05 April 2026 00:57:17 +0000 (0:00:00.321) 0:00:40.453 ********** 2026-04-05 00:58:01.557532 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557547 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557563 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557578 | orchestrator | 2026-04-05 00:58:01.557602 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-05 00:58:01.557619 | orchestrator | Sunday 05 April 2026 00:57:17 +0000 (0:00:00.311) 0:00:40.764 ********** 2026-04-05 00:58:01.557634 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557651 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557667 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557684 | orchestrator | 2026-04-05 00:58:01.557699 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-05 00:58:01.557716 | orchestrator | Sunday 05 April 2026 00:57:18 +0000 (0:00:00.526) 0:00:41.291 ********** 2026-04-05 00:58:01.557728 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557738 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557756 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557766 | orchestrator | 2026-04-05 00:58:01.557776 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-05 00:58:01.557785 | orchestrator | Sunday 05 April 2026 00:57:18 +0000 (0:00:00.320) 0:00:41.612 ********** 2026-04-05 00:58:01.557804 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557814 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557823 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557833 | orchestrator | 2026-04-05 00:58:01.557842 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-05 00:58:01.557852 | orchestrator | Sunday 05 April 2026 00:57:19 +0000 (0:00:00.321) 0:00:41.933 ********** 2026-04-05 00:58:01.557862 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557871 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557881 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.557890 | orchestrator | 2026-04-05 00:58:01.557900 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-05 00:58:01.557910 | orchestrator | Sunday 05 April 2026 00:57:19 +0000 (0:00:00.309) 0:00:42.243 ********** 2026-04-05 00:58:01.557921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.557932 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.557956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.557978 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.557988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.557999 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.558009 | orchestrator | 2026-04-05 00:58:01.558054 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-05 00:58:01.558067 | orchestrator | Sunday 05 April 2026 00:57:21 +0000 (0:00:02.513) 0:00:44.756 ********** 2026-04-05 00:58:01.558076 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.558086 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.558096 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.558105 | orchestrator | 2026-04-05 00:58:01.558147 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-05 00:58:01.558157 | orchestrator | Sunday 05 April 2026 00:57:22 +0000 (0:00:00.536) 0:00:45.293 ********** 2026-04-05 00:58:01.558183 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.558203 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.558213 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.558224 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.558245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:58:01.558263 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.558273 | orchestrator | 2026-04-05 00:58:01.558282 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-05 
00:58:01.558292 | orchestrator | Sunday 05 April 2026 00:57:24 +0000 (0:00:02.264) 0:00:47.557 ********** 2026-04-05 00:58:01.558302 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.558311 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.558321 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.558331 | orchestrator | 2026-04-05 00:58:01.558340 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-05 00:58:01.558350 | orchestrator | Sunday 05 April 2026 00:57:25 +0000 (0:00:00.338) 0:00:47.896 ********** 2026-04-05 00:58:01.558360 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.558370 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.558379 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.558389 | orchestrator | 2026-04-05 00:58:01.558400 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-05 00:58:01.558416 | orchestrator | Sunday 05 April 2026 00:57:25 +0000 (0:00:00.310) 0:00:48.207 ********** 2026-04-05 00:58:01.558431 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.558459 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.558474 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.558491 | orchestrator | 2026-04-05 00:58:01.558507 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-05 00:58:01.558522 | orchestrator | Sunday 05 April 2026 00:57:25 +0000 (0:00:00.542) 0:00:48.749 ********** 2026-04-05 00:58:01.558535 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.558550 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.558565 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.558580 | orchestrator | 2026-04-05 00:58:01.558614 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-05 
00:58:01.558645 | orchestrator | Sunday 05 April 2026 00:57:26 +0000 (0:00:00.525) 0:00:49.274 ********** 2026-04-05 00:58:01.558662 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.558678 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.558695 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.558712 | orchestrator | 2026-04-05 00:58:01.558728 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-05 00:58:01.558745 | orchestrator | Sunday 05 April 2026 00:57:26 +0000 (0:00:00.310) 0:00:49.584 ********** 2026-04-05 00:58:01.558762 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:01.558779 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:01.558796 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:01.558812 | orchestrator | 2026-04-05 00:58:01.558827 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-05 00:58:01.558838 | orchestrator | Sunday 05 April 2026 00:57:27 +0000 (0:00:01.121) 0:00:50.706 ********** 2026-04-05 00:58:01.558848 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:01.558858 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:01.558867 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:01.558887 | orchestrator | 2026-04-05 00:58:01.558896 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-05 00:58:01.558906 | orchestrator | Sunday 05 April 2026 00:57:28 +0000 (0:00:00.363) 0:00:51.070 ********** 2026-04-05 00:58:01.558916 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:01.558925 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:01.558935 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:01.558944 | orchestrator | 2026-04-05 00:58:01.558954 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-05 00:58:01.558964 | orchestrator | Sunday 05 
April 2026 00:57:28 +0000 (0:00:00.349) 0:00:51.419 ********** 2026-04-05 00:58:01.558975 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-05 00:58:01.558985 | orchestrator | ...ignoring 2026-04-05 00:58:01.558995 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-05 00:58:01.559005 | orchestrator | ...ignoring 2026-04-05 00:58:01.559015 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-05 00:58:01.559025 | orchestrator | ...ignoring 2026-04-05 00:58:01.559034 | orchestrator | 2026-04-05 00:58:01.559044 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-05 00:58:01.559053 | orchestrator | Sunday 05 April 2026 00:57:39 +0000 (0:00:10.816) 0:01:02.236 ********** 2026-04-05 00:58:01.559063 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:01.559073 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:01.559082 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:01.559092 | orchestrator | 2026-04-05 00:58:01.559101 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-05 00:58:01.559173 | orchestrator | Sunday 05 April 2026 00:57:40 +0000 (0:00:00.596) 0:01:02.832 ********** 2026-04-05 00:58:01.559185 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.559195 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.559204 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.559214 | orchestrator | 2026-04-05 00:58:01.559231 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-05 00:58:01.559241 | 
orchestrator | Sunday 05 April 2026 00:57:40 +0000 (0:00:00.370) 0:01:03.203 ********** 2026-04-05 00:58:01.559250 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.559260 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.559269 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.559279 | orchestrator | 2026-04-05 00:58:01.559288 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-05 00:58:01.559298 | orchestrator | Sunday 05 April 2026 00:57:40 +0000 (0:00:00.370) 0:01:03.573 ********** 2026-04-05 00:58:01.559307 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.559325 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.559336 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.559346 | orchestrator | 2026-04-05 00:58:01.559355 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-05 00:58:01.559365 | orchestrator | Sunday 05 April 2026 00:57:41 +0000 (0:00:00.345) 0:01:03.919 ********** 2026-04-05 00:58:01.559374 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:01.559384 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:01.559393 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:01.559403 | orchestrator | 2026-04-05 00:58:01.559413 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-05 00:58:01.559422 | orchestrator | Sunday 05 April 2026 00:57:41 +0000 (0:00:00.325) 0:01:04.245 ********** 2026-04-05 00:58:01.559433 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:01.559442 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.559452 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.559469 | orchestrator | 2026-04-05 00:58:01.559479 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 00:58:01.559488 | orchestrator | 
Sunday 05 April 2026 00:57:41 +0000 (0:00:00.549) 0:01:04.794 ********** 2026-04-05 00:58:01.559498 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.559508 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.559517 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-05 00:58:01.559527 | orchestrator | 2026-04-05 00:58:01.559536 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-05 00:58:01.559546 | orchestrator | Sunday 05 April 2026 00:57:42 +0000 (0:00:00.424) 0:01:05.219 ********** 2026-04-05 00:58:01.559558 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_zh_kdmio/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_zh_kdmio/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_zh_kdmio/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 00:58:01.559570 | orchestrator | 2026-04-05 00:58:01.559580 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 00:58:01.559589 | orchestrator | Sunday 05 April 2026 00:57:46 +0000 (0:00:04.004) 0:01:09.223 ********** 2026-04-05 00:58:01.559599 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.559608 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.559618 | orchestrator | 2026-04-05 00:58:01.559632 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-05 00:58:01.559642 | orchestrator | Sunday 05 April 2026 00:57:47 +0000 (0:00:00.625) 0:01:09.849 ********** 2026-04-05 00:58:01.559652 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:01.559661 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:01.559671 | orchestrator | 2026-04-05 00:58:01.559680 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-05 00:58:01.559690 | orchestrator | Sunday 05 April 2026 00:57:47 +0000 (0:00:00.226) 0:01:10.076 ********** 2026-04-05 00:58:01.559699 | orchestrator | 
changed: [testbed-node-1]
2026-04-05 00:58:01.559712 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:01.559724 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-05 00:58:01.559733 | orchestrator |
2026-04-05 00:58:01.559740 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-05 00:58:01.559748 | orchestrator | skipping: no hosts matched
2026-04-05 00:58:01.559756 | orchestrator |
2026-04-05 00:58:01.559764 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-05 00:58:01.559772 | orchestrator |
2026-04-05 00:58:01.559779 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-05 00:58:01.559787 | orchestrator | Sunday 05 April 2026 00:57:47 +0000 (0:00:00.262)       0:01:10.339 **********
2026-04-05 00:58:01.559796 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 275, in _raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/tmp/ansible_kolla_container_payload_r3wngey6/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py", line 421, in main
    result = bool(getattr(cw, module.params.get('action'))())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/ansible_kolla_container_payload_r3wngey6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 352, in recreate_or_restart_container
    self.start_container()
  File "/tmp/ansible_kolla_container_payload_r3wngey6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 370, in start_container
    self.pull_image()
  File "/tmp/ansible_kolla_container_payload_r3wngey6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py", line 202, in pull_image
    json.loads(line.strip().decode('utf-8')) for line in self.dc.pull(
    ^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/docker/api/image.py", line 429, in pull
    self._raise_for_status(response)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 277, in _raise_for_status
    raise create_api_error_from_http_exception(e) from e
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server: Bad Request ("invalid reference format")"}
2026-04-05 00:58:01.559805 | orchestrator |
2026-04-05 00:58:01.559813 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:58:01.559821 | orchestrator | localhost      : ok=3   changed=0  unreachable=0  failed=0  skipped=1   rescued=0  ignored=1
2026-04-05 00:58:01.559829 | orchestrator | testbed-node-0 : ok=20  changed=9  unreachable=0  failed=1  skipped=33  rescued=0  ignored=1
2026-04-05 00:58:01.559839 | orchestrator | testbed-node-1 : ok=16  changed=7  unreachable=0  failed=1  skipped=38  rescued=0  ignored=1
2026-04-05 00:58:01.559852 | orchestrator | testbed-node-2 : ok=16
changed=7  unreachable=0  failed=0  skipped=38  rescued=0  ignored=1
2026-04-05 00:58:01.559871 | orchestrator |
2026-04-05 00:58:01.559879 | orchestrator |
2026-04-05 00:58:01.559887 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:58:01.559894 | orchestrator | Sunday 05 April 2026 00:57:58 +0000 (0:00:10.730)       0:01:21.069 **********
2026-04-05 00:58:01.559902 | orchestrator | ===============================================================================
2026-04-05 00:58:01.559910 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.82s
2026-04-05 00:58:01.559918 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.73s
2026-04-05 00:58:01.559929 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.10s
2026-04-05 00:58:01.559937 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 4.00s
2026-04-05 00:58:01.559945 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.93s
2026-04-05 00:58:01.559953 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.51s
2026-04-05 00:58:01.559961 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.36s
2026-04-05 00:58:01.559968 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.25s
2026-04-05 00:58:01.559976 | orchestrator | Check MariaDB service --------------------------------------------------- 2.96s
2026-04-05 00:58:01.559984 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.86s
2026-04-05 00:58:01.559992 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.83s
2026-04-05 00:58:01.559999 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.74s
2026-04-05 00:58:01.560007 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.51s
2026-04-05 00:58:01.560015 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.26s
2026-04-05 00:58:01.560023 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 1.43s
2026-04-05 00:58:01.560030 | orchestrator | mariadb : Create MariaDB volume ----------------------------------------- 1.12s
2026-04-05 00:58:01.560038 | orchestrator | mariadb : Copying over config.json files for mariabackup ---------------- 1.12s
2026-04-05 00:58:01.560046 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.77s
2026-04-05 00:58:01.560054 | orchestrator | mariadb : Ensuring database backup config directory exists -------------- 0.64s
2026-04-05 00:58:01.560061 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.63s
2026-04-05 00:58:01.560069 | orchestrator | 2026-04-05 00:58:01 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:58:01.560077 | orchestrator | 2026-04-05 00:58:01 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:04.586315 | orchestrator | 2026-04-05 00:58:04 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED
2026-04-05 00:58:04.588164 | orchestrator | 2026-04-05 00:58:04 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED
2026-04-05 00:58:04.589463 | orchestrator | 2026-04-05 00:58:04 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:58:04.589770 | orchestrator | 2026-04-05 00:58:04 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:07.630449 | orchestrator | 2026-04-05 00:58:07 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED
2026-04-05 00:58:07.633351 | orchestrator | 2026-04-05 00:58:07
| INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:07.635536 | orchestrator | 2026-04-05 00:58:07 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:07.635609 | orchestrator | 2026-04-05 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:10.677684 | orchestrator | 2026-04-05 00:58:10 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:10.678892 | orchestrator | 2026-04-05 00:58:10 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:10.681089 | orchestrator | 2026-04-05 00:58:10 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:10.681166 | orchestrator | 2026-04-05 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:13.722191 | orchestrator | 2026-04-05 00:58:13 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:13.723092 | orchestrator | 2026-04-05 00:58:13 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:13.724666 | orchestrator | 2026-04-05 00:58:13 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:13.724766 | orchestrator | 2026-04-05 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:16.770686 | orchestrator | 2026-04-05 00:58:16 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:16.773425 | orchestrator | 2026-04-05 00:58:16 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:16.775923 | orchestrator | 2026-04-05 00:58:16 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:16.776002 | orchestrator | 2026-04-05 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:19.829299 | orchestrator | 2026-04-05 00:58:19 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is 
in state STARTED 2026-04-05 00:58:19.831282 | orchestrator | 2026-04-05 00:58:19 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:19.832227 | orchestrator | 2026-04-05 00:58:19 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:19.832254 | orchestrator | 2026-04-05 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:22.870678 | orchestrator | 2026-04-05 00:58:22 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:22.872916 | orchestrator | 2026-04-05 00:58:22 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:22.874715 | orchestrator | 2026-04-05 00:58:22 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:22.874808 | orchestrator | 2026-04-05 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:25.907824 | orchestrator | 2026-04-05 00:58:25 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:25.908616 | orchestrator | 2026-04-05 00:58:25 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:25.911003 | orchestrator | 2026-04-05 00:58:25 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:25.911067 | orchestrator | 2026-04-05 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:28.941960 | orchestrator | 2026-04-05 00:58:28 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:28.942572 | orchestrator | 2026-04-05 00:58:28 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:28.943765 | orchestrator | 2026-04-05 00:58:28 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:28.944039 | orchestrator | 2026-04-05 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:31.980262 | 
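The pull failures above all share one cause: the image reference `registry.osism.tech/kolla/release//mariadb-server` contains a doubled slash, i.e. an empty path component, which typically indicates an empty namespace/prefix variable interpolated into the Kolla image name. The Docker daemon rejects such a name before pulling with "invalid reference format". A minimal sketch of why, using a simplified approximation of the distribution/reference repository grammar (the regex below is illustrative, not the exact upstream pattern):

```python
import re

# Simplified component grammar from docker/distribution reference names:
# lowercase alphanumerics, with '.', '_', '__' or '-' runs only *inside*
# a component; components are joined by single slashes.
COMPONENT = r"[a-z0-9]+(?:(?:[._]|__|[-]+)[a-z0-9]+)*"
PATH_RE = re.compile(rf"{COMPONENT}(?:/{COMPONENT})*")

def is_valid_repository(name: str) -> bool:
    """Return True if the repository part of an image reference parses."""
    # Strip an optional registry host (first segment containing '.' or ':').
    host, _, rest = name.partition("/")
    if "." in host or ":" in host:
        name = rest
    return PATH_RE.fullmatch(name) is not None

# The failing reference from the log: '//' produces an empty component.
print(is_valid_repository("registry.osism.tech/kolla/release//mariadb-server"))  # False
# With the empty component removed, the reference is well-formed.
print(is_valid_repository("registry.osism.tech/kolla/release/mariadb-server"))   # True
```

The fix is therefore not on the Docker side but in the image-name assembly (here, the empty segment between `release` and the image name).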
orchestrator | 2026-04-05 00:58:31 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:31.982353 | orchestrator | 2026-04-05 00:58:31 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:31.985533 | orchestrator | 2026-04-05 00:58:31 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:31.988554 | orchestrator | 2026-04-05 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:35.033788 | orchestrator | 2026-04-05 00:58:35 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state STARTED 2026-04-05 00:58:35.035508 | orchestrator | 2026-04-05 00:58:35 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED 2026-04-05 00:58:35.037105 | orchestrator | 2026-04-05 00:58:35 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:35.037312 | orchestrator | 2026-04-05 00:58:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:38.085569 | orchestrator | 2026-04-05 00:58:38 | INFO  | Task aac56a80-6929-4585-88d5-2fd3f166c3a6 is in state SUCCESS 2026-04-05 00:58:38.086611 | orchestrator | 2026-04-05 00:58:38.086659 | orchestrator | 2026-04-05 00:58:38.086680 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:58:38.086695 | orchestrator | 2026-04-05 00:58:38.086709 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:58:38.086722 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:00.333) 0:00:00.333 ********** 2026-04-05 00:58:38.086735 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:38.086748 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:38.086762 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:38.086775 | orchestrator | 2026-04-05 00:58:38.086789 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-04-05 00:58:38.086802 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:00.313) 0:00:00.647 ********** 2026-04-05 00:58:38.086816 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-05 00:58:38.086831 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-05 00:58:38.086845 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-05 00:58:38.086859 | orchestrator | 2026-04-05 00:58:38.086868 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-05 00:58:38.086876 | orchestrator | 2026-04-05 00:58:38.086885 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 00:58:38.086893 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:00.326) 0:00:00.974 ********** 2026-04-05 00:58:38.086915 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:38.086925 | orchestrator | 2026-04-05 00:58:38.086932 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-05 00:58:38.086940 | orchestrator | Sunday 05 April 2026 00:58:03 +0000 (0:00:00.646) 0:00:01.620 ********** 2026-04-05 00:58:38.086954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:58:38.087164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:58:38.087187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:58:38.087205 | orchestrator | 2026-04-05 00:58:38.087216 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-05 00:58:38.087226 | orchestrator | Sunday 05 April 2026 00:58:05 +0000 (0:00:01.879) 0:00:03.500 ********** 2026-04-05 00:58:38.087236 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:38.087246 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:38.087472 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:38.087482 | orchestrator | 2026-04-05 00:58:38.087497 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 00:58:38.087507 | orchestrator | Sunday 05 April 2026 00:58:05 +0000 (0:00:00.325) 0:00:03.826 ********** 2026-04-05 00:58:38.087516 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 00:58:38.087526 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 00:58:38.087535 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 00:58:38.087544 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-05 00:58:38.087554 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 00:58:38.087564 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 00:58:38.087573 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-05 00:58:38.087581 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 00:58:38.087589 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 
00:58:38.087598 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 00:58:38.087606 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 00:58:38.087614 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-05 00:58:38.087622 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 00:58:38.087630 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 00:58:38.087638 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 00:58:38.087654 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-05 00:58:38.087662 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 00:58:38.087670 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 00:58:38.087678 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 00:58:38.087686 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-05 00:58:38.087694 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 00:58:38.087702 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 00:58:38.087710 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-05 00:58:38.087717 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 00:58:38.087726 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-05 
00:58:38.087736 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-05 00:58:38.087744 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-05 00:58:38.087752 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-05 00:58:38.087832 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-05 00:58:38.087850 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-05 00:58:38.087859 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-05 00:58:38.087866 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-05 00:58:38.087874 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-05 00:58:38.087883 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-05 00:58:38.087891 | orchestrator | 2026-04-05 00:58:38.087899 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 00:58:38.087908 | orchestrator | Sunday 05 April 2026 
00:58:06 +0000 (0:00:00.738) 0:00:04.564 ********** 2026-04-05 00:58:38.087916 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:38.087924 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:38.087938 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:38.087946 | orchestrator | 2026-04-05 00:58:38.087954 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 00:58:38.087962 | orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:00:00.518) 0:00:05.082 ********** 2026-04-05 00:58:38.087970 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.087978 | orchestrator | 2026-04-05 00:58:38.087986 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 00:58:38.087994 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.121) 0:00:05.203 ********** 2026-04-05 00:58:38.088002 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.088010 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:38.088023 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:38.088031 | orchestrator | 2026-04-05 00:58:38.088039 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 00:58:38.088047 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.303) 0:00:05.506 ********** 2026-04-05 00:58:38.088055 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:38.088063 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:38.088071 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:38.088117 | orchestrator | 2026-04-05 00:58:38.088126 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 00:58:38.088134 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.314) 0:00:05.821 ********** 2026-04-05 00:58:38.088141 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.088149 | orchestrator | 2026-04-05 
00:58:38.088161 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 00:58:38.088169 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.130) 0:00:05.951 ********** 2026-04-05 00:58:38.088177 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.088185 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:38.088193 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:38.088201 | orchestrator | 2026-04-05 00:58:38.088209 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 00:58:38.088217 | orchestrator | Sunday 05 April 2026 00:58:08 +0000 (0:00:00.483) 0:00:06.435 ********** 2026-04-05 00:58:38.088225 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:38.088232 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:38.088240 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:38.088248 | orchestrator | 2026-04-05 00:58:38.088256 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 00:58:38.088264 | orchestrator | Sunday 05 April 2026 00:58:08 +0000 (0:00:00.357) 0:00:06.793 ********** 2026-04-05 00:58:38.088271 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.088279 | orchestrator | 2026-04-05 00:58:38.088287 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 00:58:38.088295 | orchestrator | Sunday 05 April 2026 00:58:08 +0000 (0:00:00.178) 0:00:06.971 ********** 2026-04-05 00:58:38.088303 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.088311 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:38.088318 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:38.088326 | orchestrator | 2026-04-05 00:58:38.088334 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 00:58:38.088342 | orchestrator | 
Sunday 05 April 2026 00:58:09 +0000 (0:00:00.374) 0:00:07.346 **********
2026-04-05 00:58:38.088350 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:38.088357 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:38.088365 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:38.088373 | orchestrator |
2026-04-05 00:58:38.088381 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 00:58:38.088389 | orchestrator | Sunday 05 April 2026 00:58:09 +0000 (0:00:00.347) 0:00:07.694 **********
2026-04-05 00:58:38.088397 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088405 | orchestrator |
2026-04-05 00:58:38.088412 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 00:58:38.088420 | orchestrator | Sunday 05 April 2026 00:58:09 +0000 (0:00:00.125) 0:00:07.819 **********
2026-04-05 00:58:38.088428 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088436 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.088443 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.088451 | orchestrator |
2026-04-05 00:58:38.088459 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 00:58:38.088467 | orchestrator | Sunday 05 April 2026 00:58:10 +0000 (0:00:00.545) 0:00:08.364 **********
2026-04-05 00:58:38.088475 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:38.088483 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:38.088496 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:38.088504 | orchestrator |
2026-04-05 00:58:38.088512 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 00:58:38.088520 | orchestrator | Sunday 05 April 2026 00:58:10 +0000 (0:00:00.360) 0:00:08.725 **********
2026-04-05 00:58:38.088528 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088536 | orchestrator |
2026-04-05 00:58:38.088543 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 00:58:38.088551 | orchestrator | Sunday 05 April 2026 00:58:10 +0000 (0:00:00.194) 0:00:08.920 **********
2026-04-05 00:58:38.088559 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088567 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.088575 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.088583 | orchestrator |
2026-04-05 00:58:38.088590 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 00:58:38.088598 | orchestrator | Sunday 05 April 2026 00:58:11 +0000 (0:00:00.283) 0:00:09.204 **********
2026-04-05 00:58:38.088606 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:38.088614 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:38.088622 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:38.088629 | orchestrator |
2026-04-05 00:58:38.088637 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 00:58:38.088645 | orchestrator | Sunday 05 April 2026 00:58:11 +0000 (0:00:00.344) 0:00:09.549 **********
2026-04-05 00:58:38.088653 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088661 | orchestrator |
2026-04-05 00:58:38.088669 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 00:58:38.088676 | orchestrator | Sunday 05 April 2026 00:58:11 +0000 (0:00:00.125) 0:00:09.674 **********
2026-04-05 00:58:38.088689 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088697 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.088705 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.088713 | orchestrator |
2026-04-05 00:58:38.088721 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 00:58:38.088729 | orchestrator | Sunday 05 April 2026 00:58:12 +0000 (0:00:00.759) 0:00:10.434 **********
2026-04-05 00:58:38.088737 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:38.088745 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:38.088753 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:38.088760 | orchestrator |
2026-04-05 00:58:38.088768 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 00:58:38.088776 | orchestrator | Sunday 05 April 2026 00:58:12 +0000 (0:00:00.315) 0:00:10.749 **********
2026-04-05 00:58:38.088784 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088792 | orchestrator |
2026-04-05 00:58:38.088800 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 00:58:38.088808 | orchestrator | Sunday 05 April 2026 00:58:12 +0000 (0:00:00.118) 0:00:10.868 **********
2026-04-05 00:58:38.088816 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088824 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.088832 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.088840 | orchestrator |
2026-04-05 00:58:38.088848 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 00:58:38.088860 | orchestrator | Sunday 05 April 2026 00:58:12 +0000 (0:00:00.314) 0:00:11.182 **********
2026-04-05 00:58:38.088868 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:38.088876 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:38.088883 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:38.088891 | orchestrator |
2026-04-05 00:58:38.088899 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 00:58:38.088907 | orchestrator | Sunday 05 April 2026 00:58:13 +0000 (0:00:00.336) 0:00:11.518 **********
2026-04-05 00:58:38.088915 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088923 | orchestrator |
2026-04-05 00:58:38.088931 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 00:58:38.088944 | orchestrator | Sunday 05 April 2026 00:58:13 +0000 (0:00:00.398) 0:00:11.917 **********
2026-04-05 00:58:38.088952 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.088960 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.088968 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.088975 | orchestrator |
2026-04-05 00:58:38.088983 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 00:58:38.088991 | orchestrator | Sunday 05 April 2026 00:58:14 +0000 (0:00:00.336) 0:00:12.253 **********
2026-04-05 00:58:38.088999 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:38.089007 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:38.089015 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:38.089023 | orchestrator |
2026-04-05 00:58:38.089030 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 00:58:38.089038 | orchestrator | Sunday 05 April 2026 00:58:14 +0000 (0:00:00.334) 0:00:12.588 **********
2026-04-05 00:58:38.089046 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.089054 | orchestrator |
2026-04-05 00:58:38.089062 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 00:58:38.089069 | orchestrator | Sunday 05 April 2026 00:58:14 +0000 (0:00:00.147) 0:00:12.736 **********
2026-04-05 00:58:38.089077 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.089107 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.089115 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.089123 | orchestrator |
2026-04-05 00:58:38.089131 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 00:58:38.089139 | orchestrator | Sunday 05 April 2026 00:58:14 +0000 (0:00:00.291) 0:00:13.028 **********
2026-04-05 00:58:38.089148 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:38.089162 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:38.089175 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:38.089189 | orchestrator |
2026-04-05 00:58:38.089203 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 00:58:38.089215 | orchestrator | Sunday 05 April 2026 00:58:15 +0000 (0:00:00.585) 0:00:13.613 **********
2026-04-05 00:58:38.089227 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.089239 | orchestrator |
2026-04-05 00:58:38.089251 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 00:58:38.089265 | orchestrator | Sunday 05 April 2026 00:58:15 +0000 (0:00:00.128) 0:00:13.742 **********
2026-04-05 00:58:38.089278 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.089291 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.089304 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.089318 | orchestrator |
2026-04-05 00:58:38.089330 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-05 00:58:38.089344 | orchestrator | Sunday 05 April 2026 00:58:15 +0000 (0:00:00.323) 0:00:14.066 **********
2026-04-05 00:58:38.089357 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:38.089370 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:38.089390 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:38.089404 | orchestrator |
2026-04-05 00:58:38.089419 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-05 00:58:38.089433 | orchestrator | Sunday 05 April 2026 00:58:17 +0000 (0:00:01.705) 0:00:15.771 **********
2026-04-05 00:58:38.089447 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 00:58:38.089461 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 00:58:38.089474 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 00:58:38.089488 | orchestrator |
2026-04-05 00:58:38.089503 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-05 00:58:38.089519 | orchestrator | Sunday 05 April 2026 00:58:20 +0000 (0:00:02.659) 0:00:18.430 **********
2026-04-05 00:58:38.089533 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-05 00:58:38.089573 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-05 00:58:38.089589 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-05 00:58:38.089605 | orchestrator |
2026-04-05 00:58:38.089621 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-05 00:58:38.089636 | orchestrator | Sunday 05 April 2026 00:58:23 +0000 (0:00:03.344) 0:00:21.775 **********
2026-04-05 00:58:38.089652 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-05 00:58:38.089667 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-05 00:58:38.089682 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-05 00:58:38.089698 | orchestrator |
2026-04-05 00:58:38.089714 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-05 00:58:38.089730 | orchestrator | Sunday 05 April 2026 00:58:25 +0000 (0:00:01.671) 0:00:23.446 **********
2026-04-05 00:58:38.089746 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.089761 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.089776 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.089785 | orchestrator |
2026-04-05 00:58:38.089801 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-05 00:58:38.089811 | orchestrator | Sunday 05 April 2026 00:58:25 +0000 (0:00:00.296) 0:00:23.743 **********
2026-04-05 00:58:38.089819 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.089828 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.089839 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.089853 | orchestrator |
2026-04-05 00:58:38.089873 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 00:58:38.089888 | orchestrator | Sunday 05 April 2026 00:58:25 +0000 (0:00:00.322) 0:00:24.066 **********
2026-04-05 00:58:38.089901 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:38.089915 | orchestrator |
2026-04-05 00:58:38.089930 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-05 00:58:38.089946 | orchestrator | Sunday 05 April 2026 00:58:26 +0000 (0:00:00.805) 0:00:24.871 **********
2026-04-05 00:58:38.089965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no',
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:58:38.090007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:58:38.090142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:58:38.090166 | orchestrator | 2026-04-05 00:58:38.090175 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-05 00:58:38.090184 | orchestrator | Sunday 05 April 2026 00:58:28 +0000 (0:00:01.815) 0:00:26.687 ********** 2026-04-05 00:58:38.090200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:58:38.090210 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.090227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:58:38.090243 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:38.090258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:58:38.090289 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:38.090298 | orchestrator | 2026-04-05 00:58:38.090307 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-05 00:58:38.090315 | orchestrator | Sunday 05 April 2026 00:58:29 +0000 (0:00:00.698) 0:00:27.385 ********** 2026-04-05 
00:58:38.090337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:58:38.090347 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.090357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:58:38.090372 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:38.090394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option 
httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:58:38.090404 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:38.090413 | orchestrator | 2026-04-05 00:58:38.090422 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-05 00:58:38.090430 | orchestrator | Sunday 05 April 2026 00:58:30 +0000 (0:00:01.472) 0:00:28.858 ********** 2026-04-05 00:58:38.090444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:58:38.090466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:58:38.090483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 00:58:38.090501 | orchestrator |
2026-04-05 00:58:38.090510 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] ***
2026-04-05 00:58:38.090519 | orchestrator | Sunday 05 April 2026 00:58:32 +0000 (0:00:01.464) 0:00:30.322 **********
2026-04-05 00:58:38.090528 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 00:58:38.090537 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:58:38.090546 | orchestrator | }
2026-04-05 00:58:38.090555 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 00:58:38.090564 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:58:38.090572 | orchestrator | }
2026-04-05 00:58:38.090581 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 00:58:38.090590 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 00:58:38.090599 | orchestrator | }
2026-04-05 00:58:38.090608 | orchestrator |
2026-04-05 00:58:38.090620 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 00:58:38.090629 | orchestrator | Sunday 05 April 2026 00:58:32 +0000 (0:00:00.624) 0:00:30.946 **********
2026-04-05 00:58:38.090639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes':
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:58:38.090659 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:38.090689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:58:38.090704 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
00:58:38.090718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 00:58:38.090741 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.090755 | orchestrator |
2026-04-05 00:58:38.090768 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 00:58:38.090783 | orchestrator | Sunday 05 April 2026 00:58:34 +0000 (0:00:01.486) 0:00:32.433 **********
2026-04-05 00:58:38.090799 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:38.090813 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:38.090826 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:38.090839 | orchestrator |
2026-04-05 00:58:38.090861 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 00:58:38.090876 | orchestrator | Sunday 05 April 2026 00:58:34 +0000 (0:00:00.360) 0:00:32.793 **********
2026-04-05 00:58:38.090892 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:38.090908 | orchestrator |
2026-04-05 00:58:38.090922 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-05 00:58:38.090938 | orchestrator | Sunday 05 April 2026 00:58:35 +0000 (0:00:00.605) 0:00:33.399 **********
2026-04-05 00:58:38.090948 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-05 00:58:38.090957 | orchestrator |
2026-04-05 00:58:38.090966 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:58:38.090975 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26  rescued=0 ignored=0
2026-04-05 00:58:38.090991 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-05 00:58:38.091000 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-05 00:58:38.091009 | orchestrator |
2026-04-05 00:58:38.091018 | orchestrator |
2026-04-05 00:58:38.091026 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:58:38.091042 | orchestrator | Sunday 05 April 2026 00:58:36 +0000 (0:00:00.873) 0:00:34.273 **********
2026-04-05 00:58:38.091051 | orchestrator | ===============================================================================
2026-04-05 00:58:38.091060 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.34s
2026-04-05 00:58:38.091068 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.66s
2026-04-05 00:58:38.091077 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.88s
2026-04-05 00:58:38.091106 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.82s
2026-04-05 00:58:38.091115 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.71s
2026-04-05 00:58:38.091124 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.67s
2026-04-05 00:58:38.091133 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.49s
2026-04-05 00:58:38.091141 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.47s
2026-04-05 00:58:38.091150 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.46s
2026-04-05 00:58:38.091158 | orchestrator | horizon : Creating Horizon database ------------------------------------- 0.87s
2026-04-05 00:58:38.091167 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s
2026-04-05 00:58:38.091176 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.76s
2026-04-05 00:58:38.091184 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s
2026-04-05 00:58:38.091340 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s
2026-04-05 00:58:38.091354 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2026-04-05 00:58:38.091362 | orchestrator | service-check-containers : horizon | Notify handlers to restart containers --- 0.63s
2026-04-05 00:58:38.091371 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s
2026-04-05 00:58:38.091382 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s
2026-04-05 00:58:38.091397 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s
2026-04-05 00:58:38.091412 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s
2026-04-05 00:58:38.091433 | orchestrator | 2026-04-05 00:58:38 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED
2026-04-05 00:58:38.093946 | orchestrator | 2026-04-05 00:58:38 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:58:38.093992 | orchestrator | 2026-04-05 00:58:38 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:41.135488 | orchestrator | 2026-04-05 00:58:41 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED
2026-04-05 00:58:41.138233 | orchestrator | 2026-04-05 00:58:41 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:58:41.139154 | orchestrator | 2026-04-05 00:58:41 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:44.196692 | orchestrator | 2026-04-05 00:58:44 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED
2026-04-05 00:58:44.198275 | orchestrator | 2026-04-05 00:58:44 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:58:44.198310 | orchestrator | 2026-04-05 00:58:44 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:47.242292 | orchestrator | 2026-04-05 00:58:47 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED
2026-04-05 00:58:47.243271 | orchestrator | 2026-04-05 00:58:47 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:58:47.243294 | orchestrator | 2026-04-05 00:58:47 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:50.305122 | orchestrator | 2026-04-05 00:58:50 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state STARTED
2026-04-05 00:58:50.306748 | orchestrator | 2026-04-05 00:58:50 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:58:50.307235 | orchestrator | 2026-04-05 00:58:50 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:53.371133 | orchestrator | 2026-04-05 00:58:53 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED
2026-04-05 00:58:53.371488 | orchestrator | 2026-04-05 00:58:53 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED
2026-04-05 00:58:53.373802 | orchestrator | 2026-04-05 00:58:53 | INFO  | Task 97f28831-cb65-471b-841a-946469db2d8c is in state SUCCESS
2026-04-05 00:58:53.375299 | orchestrator |
2026-04-05 00:58:53.375319 | orchestrator |
2026-04-05 00:58:53.375335 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:58:53.375343 | orchestrator |
2026-04-05 00:58:53.375349 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:58:53.375356 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:00.345) 0:00:00.345 **********
2026-04-05 00:58:53.375363 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:53.375371 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:53.375377 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:53.375383 | orchestrator |
2026-04-05 00:58:53.375390 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:58:53.375396 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:00.282) 0:00:00.627 **********
2026-04-05 00:58:53.375404 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-05 00:58:53.375459 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-05 00:58:53.375464 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-05 00:58:53.375468 | orchestrator |
2026-04-05 00:58:53.375472 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-05 00:58:53.375476 | orchestrator |
2026-04-05 00:58:53.375480 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-05 00:58:53.375484 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:00.351) 0:00:00.979 **********
2026-04-05 00:58:53.375488 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:53.375493 | orchestrator |
2026-04-05 00:58:53.375497 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-05
00:58:53.375501 | orchestrator | Sunday 05 April 2026 00:58:03 +0000 (0:00:00.716) 0:00:01.695 ********** 2026-04-05 00:58:53.375508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.375515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.375550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.375556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375591 | orchestrator | 2026-04-05 00:58:53.375675 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-05 00:58:53.375684 | orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:00:02.543) 0:00:04.239 ********** 2026-04-05 00:58:53.375690 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.375697 | orchestrator | 2026-04-05 00:58:53.375701 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-05 00:58:53.375705 | 
orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:00:00.145) 0:00:04.384 ********** 2026-04-05 00:58:53.375709 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.375713 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:53.375717 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.375857 | orchestrator | 2026-04-05 00:58:53.375862 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-05 00:58:53.375866 | orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:00:00.291) 0:00:04.675 ********** 2026-04-05 00:58:53.375870 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 00:58:53.375874 | orchestrator | 2026-04-05 00:58:53.375878 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 00:58:53.375881 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:01.027) 0:00:05.702 ********** 2026-04-05 00:58:53.375885 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:53.375889 | orchestrator | 2026-04-05 00:58:53.375893 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-05 00:58:53.375897 | orchestrator | Sunday 05 April 2026 00:58:08 +0000 (0:00:00.708) 0:00:06.411 ********** 2026-04-05 00:58:53.375902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.375912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.375926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.375932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.375964 | orchestrator | 2026-04-05 00:58:53.375976 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-05 00:58:53.375981 | orchestrator | Sunday 05 April 2026 00:58:11 +0000 (0:00:03.372) 0:00:09.783 ********** 2026-04-05 00:58:53.375985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.375993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.375998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.376002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.376007 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:53.376020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2026-04-05 00:58:53.376029 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.376034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.376041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.376050 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.376054 | orchestrator | 2026-04-05 00:58:53.376059 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-05 00:58:53.376063 | orchestrator | Sunday 05 April 2026 00:58:12 +0000 (0:00:00.677) 0:00:10.460 ********** 2026-04-05 00:58:53.376105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}}}})  2026-04-05 00:58:53.376110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.376122 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.376127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.376131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.376140 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:53.376151 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.376160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.376168 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.376173 | orchestrator | 2026-04-05 00:58:53.376177 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-05 00:58:53.376181 | orchestrator | Sunday 05 April 2026 00:58:13 +0000 (0:00:00.972) 0:00:11.433 ********** 2026-04-05 00:58:53.376185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376292 | orchestrator | 2026-04-05 00:58:53.376296 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-05 00:58:53.376300 | orchestrator | Sunday 05 April 2026 00:58:16 +0000 (0:00:03.300) 0:00:14.734 ********** 2026-04-05 00:58:53.376305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376335 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376369 | orchestrator | 2026-04-05 00:58:53.376373 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-05 00:58:53.376377 | orchestrator | Sunday 05 April 2026 00:58:23 +0000 (0:00:06.513) 0:00:21.248 ********** 
2026-04-05 00:58:53.376382 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:53.376386 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:53.376390 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:53.376394 | orchestrator | 2026-04-05 00:58:53.376398 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-05 00:58:53.376402 | orchestrator | Sunday 05 April 2026 00:58:24 +0000 (0:00:01.593) 0:00:22.841 ********** 2026-04-05 00:58:53.376406 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.376411 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:53.376415 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.376419 | orchestrator | 2026-04-05 00:58:53.376423 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-05 00:58:53.376427 | orchestrator | Sunday 05 April 2026 00:58:25 +0000 (0:00:00.805) 0:00:23.647 ********** 2026-04-05 00:58:53.376431 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.376435 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:53.376439 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.376444 | orchestrator | 2026-04-05 00:58:53.376448 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-05 00:58:53.376452 | orchestrator | Sunday 05 April 2026 00:58:26 +0000 (0:00:00.559) 0:00:24.206 ********** 2026-04-05 00:58:53.376456 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.376460 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:53.376465 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.376469 | orchestrator | 2026-04-05 00:58:53.376473 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-05 00:58:53.376477 | orchestrator | Sunday 05 April 2026 00:58:26 +0000 (0:00:00.299) 0:00:24.506 ********** 
2026-04-05 00:58:53.376482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.376486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.376498 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.376508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.376513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.376522 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.376527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.376535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.376551 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:53.376555 | orchestrator | 2026-04-05 00:58:53.376559 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 00:58:53.376563 | orchestrator | Sunday 05 April 2026 00:58:27 +0000 (0:00:00.772) 0:00:25.278 ********** 2026-04-05 00:58:53.376567 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.376572 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 00:58:53.376576 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.376580 | orchestrator | 2026-04-05 00:58:53.376584 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-05 00:58:53.376588 | orchestrator | Sunday 05 April 2026 00:58:27 +0000 (0:00:00.388) 0:00:25.666 ********** 2026-04-05 00:58:53.376592 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 00:58:53.376597 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 00:58:53.376601 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 00:58:53.376605 | orchestrator | 2026-04-05 00:58:53.376609 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-05 00:58:53.376613 | orchestrator | Sunday 05 April 2026 00:58:29 +0000 (0:00:01.950) 0:00:27.616 ********** 2026-04-05 00:58:53.376618 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 00:58:53.376622 | orchestrator | 2026-04-05 00:58:53.376626 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-05 00:58:53.376630 | orchestrator | Sunday 05 April 2026 00:58:30 +0000 (0:00:01.253) 0:00:28.870 ********** 2026-04-05 00:58:53.376634 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.376638 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:53.376642 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:53.376646 | orchestrator | 2026-04-05 00:58:53.376651 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-05 00:58:53.376655 | orchestrator | Sunday 05 April 2026 00:58:31 +0000 (0:00:00.990) 0:00:29.861 ********** 2026-04-05 00:58:53.376662 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-04-05 00:58:53.376666 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 00:58:53.376670 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 00:58:53.376675 | orchestrator | 2026-04-05 00:58:53.376679 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-05 00:58:53.376683 | orchestrator | Sunday 05 April 2026 00:58:33 +0000 (0:00:02.006) 0:00:31.867 ********** 2026-04-05 00:58:53.376687 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:53.376692 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:53.376696 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:53.376700 | orchestrator | 2026-04-05 00:58:53.376704 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-05 00:58:53.376708 | orchestrator | Sunday 05 April 2026 00:58:34 +0000 (0:00:00.299) 0:00:32.167 ********** 2026-04-05 00:58:53.376712 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 00:58:53.376716 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 00:58:53.376721 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 00:58:53.376725 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 00:58:53.376729 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 00:58:53.376733 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 00:58:53.376737 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 00:58:53.376741 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 
'fernet-node-sync.sh'}) 2026-04-05 00:58:53.376746 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 00:58:53.376750 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 00:58:53.376754 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 00:58:53.376758 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 00:58:53.376762 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 00:58:53.376766 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 00:58:53.376772 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 00:58:53.376779 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 00:58:53.376783 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 00:58:53.376788 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 00:58:53.376792 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 00:58:53.376796 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 00:58:53.376800 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 00:58:53.376804 | orchestrator | 2026-04-05 00:58:53.376808 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-05 00:58:53.376812 | orchestrator | Sunday 05 April 2026 00:58:43 +0000 
(0:00:09.402) 0:00:41.569 ********** 2026-04-05 00:58:53.376817 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 00:58:53.376821 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 00:58:53.376828 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 00:58:53.376832 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 00:58:53.376836 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 00:58:53.376840 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 00:58:53.376845 | orchestrator | 2026-04-05 00:58:53.376849 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-05 00:58:53.376853 | orchestrator | Sunday 05 April 2026 00:58:46 +0000 (0:00:02.577) 0:00:44.146 ********** 2026-04-05 00:58:53.376857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 00:58:53.376880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 00:58:53.376914 | orchestrator | 2026-04-05 00:58:53.376919 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-05 00:58:53.376927 | orchestrator | Sunday 05 April 2026 00:58:48 +0000 (0:00:02.327) 0:00:46.474 ********** 2026-04-05 00:58:53.376931 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 00:58:53.376936 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:58:53.376941 | orchestrator | } 2026-04-05 00:58:53.376946 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 00:58:53.376951 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:58:53.376956 | orchestrator | } 2026-04-05 00:58:53.376960 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 00:58:53.376965 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 00:58:53.376969 | orchestrator | } 2026-04-05 00:58:53.376974 | orchestrator | 2026-04-05 00:58:53.376979 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 00:58:53.376983 | orchestrator | Sunday 05 April 2026 00:58:48 +0000 (0:00:00.317) 0:00:46.792 ********** 2026-04-05 00:58:53.376989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.376994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.376999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:58:53.377004 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:53.377016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 00:58:53.377025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:58:53.377030 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:58:53.377035 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:53.377041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 00:58:53.377046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 00:58:53.377051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 00:58:53.377060 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:53.377075 | orchestrator |
2026-04-05 00:58:53.377080 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-05 00:58:53.377085 | orchestrator | Sunday 05 April 2026 00:58:49 +0000 (0:00:01.058) 0:00:47.850 **********
2026-04-05 00:58:53.377096 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:53.377101 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:53.377106 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:53.377111 | orchestrator |
2026-04-05 00:58:53.377116 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-05 00:58:53.377121 | orchestrator | Sunday 05 April 2026 00:58:50 +0000 (0:00:00.343) 0:00:48.194 **********
2026-04-05 00:58:53.377125 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-05 00:58:53.377130 | orchestrator |
2026-04-05 00:58:53.377135 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:58:53.377140 | orchestrator | testbed-node-0 : ok=18  changed=10  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-05 00:58:53.377146 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-05 00:58:53.377152 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-05 00:58:53.377157 | orchestrator |
2026-04-05 00:58:53.377161 | orchestrator |
2026-04-05 00:58:53.377166 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:58:53.377171 | orchestrator | Sunday 05 April 2026 00:58:50 +0000 (0:00:00.799) 0:00:48.993 **********
2026-04-05 00:58:53.377176 | orchestrator | ===============================================================================
2026-04-05 00:58:53.377181 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.40s
2026-04-05 00:58:53.377186 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.51s
2026-04-05 00:58:53.377191 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.37s
2026-04-05 00:58:53.377195 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.30s
2026-04-05 00:58:53.377200 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.58s
2026-04-05 00:58:53.377205 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.54s
2026-04-05 00:58:53.377210 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.33s
2026-04-05 00:58:53.377214 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 2.01s
2026-04-05 00:58:53.377219 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.95s
2026-04-05 00:58:53.377224 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.59s
2026-04-05 00:58:53.377229 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 1.25s
2026-04-05 00:58:53.377233 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.06s
2026-04-05 00:58:53.377283 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 1.03s
2026-04-05 00:58:53.377289 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.99s
2026-04-05 00:58:53.377294 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.97s
2026-04-05 00:58:53.377299 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.81s
2026-04-05 00:58:53.377303 | orchestrator | keystone : Creating keystone database ----------------------------------- 0.80s
2026-04-05 00:58:53.377307 | orchestrator | keystone : Copying over existing policy file ---------------------------- 0.77s
2026-04-05 00:58:53.377315 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.72s
2026-04-05 00:58:53.377319 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.71s
2026-04-05 00:58:53.377324 | orchestrator | 2026-04-05 00:58:53 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED
2026-04-05 00:58:53.377330 | orchestrator | 2026-04-05 00:58:53 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:58:53.378264 |
orchestrator | 2026-04-05 00:58:53 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:58:53.378282 | orchestrator | 2026-04-05 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:56.416895 | orchestrator | 2026-04-05 00:58:56 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:58:56.417834 | orchestrator | 2026-04-05 00:58:56 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:58:56.419040 | orchestrator | 2026-04-05 00:58:56 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:58:56.419929 | orchestrator | 2026-04-05 00:58:56 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:56.421034 | orchestrator | 2026-04-05 00:58:56 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:58:56.421099 | orchestrator | 2026-04-05 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:59.473226 | orchestrator | 2026-04-05 00:58:59 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:58:59.474304 | orchestrator | 2026-04-05 00:58:59 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:58:59.474645 | orchestrator | 2026-04-05 00:58:59 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:58:59.475917 | orchestrator | 2026-04-05 00:58:59 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:58:59.477008 | orchestrator | 2026-04-05 00:58:59 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:58:59.477440 | orchestrator | 2026-04-05 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:02.528240 | orchestrator | 2026-04-05 00:59:02 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:02.529717 | orchestrator | 2026-04-05 
00:59:02 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:02.534333 | orchestrator | 2026-04-05 00:59:02 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:02.536917 | orchestrator | 2026-04-05 00:59:02 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:02.539532 | orchestrator | 2026-04-05 00:59:02 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:02.539969 | orchestrator | 2026-04-05 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:05.588015 | orchestrator | 2026-04-05 00:59:05 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:05.590540 | orchestrator | 2026-04-05 00:59:05 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:05.592945 | orchestrator | 2026-04-05 00:59:05 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:05.594998 | orchestrator | 2026-04-05 00:59:05 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:05.597395 | orchestrator | 2026-04-05 00:59:05 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:05.597467 | orchestrator | 2026-04-05 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:08.650468 | orchestrator | 2026-04-05 00:59:08 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:08.654581 | orchestrator | 2026-04-05 00:59:08 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:08.658235 | orchestrator | 2026-04-05 00:59:08 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:08.661882 | orchestrator | 2026-04-05 00:59:08 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:08.666555 | orchestrator | 2026-04-05 
00:59:08 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:08.666694 | orchestrator | 2026-04-05 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:11.726421 | orchestrator | 2026-04-05 00:59:11 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:11.976095 | orchestrator | 2026-04-05 00:59:11 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:11.976166 | orchestrator | 2026-04-05 00:59:11 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:11.976183 | orchestrator | 2026-04-05 00:59:11 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:11.976204 | orchestrator | 2026-04-05 00:59:11 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:11.976223 | orchestrator | 2026-04-05 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:14.784237 | orchestrator | 2026-04-05 00:59:14 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:14.785966 | orchestrator | 2026-04-05 00:59:14 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:14.789862 | orchestrator | 2026-04-05 00:59:14 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:14.792699 | orchestrator | 2026-04-05 00:59:14 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:14.793882 | orchestrator | 2026-04-05 00:59:14 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:14.793920 | orchestrator | 2026-04-05 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:17.851511 | orchestrator | 2026-04-05 00:59:17 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:17.853900 | orchestrator | 2026-04-05 00:59:17 | INFO  | Task 
bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:17.855846 | orchestrator | 2026-04-05 00:59:17 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:17.858377 | orchestrator | 2026-04-05 00:59:17 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:17.860828 | orchestrator | 2026-04-05 00:59:17 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:17.860885 | orchestrator | 2026-04-05 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:20.908292 | orchestrator | 2026-04-05 00:59:20 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:20.910425 | orchestrator | 2026-04-05 00:59:20 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:20.913229 | orchestrator | 2026-04-05 00:59:20 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:20.915015 | orchestrator | 2026-04-05 00:59:20 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:20.917930 | orchestrator | 2026-04-05 00:59:20 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:20.917993 | orchestrator | 2026-04-05 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:23.971388 | orchestrator | 2026-04-05 00:59:23 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:23.972238 | orchestrator | 2026-04-05 00:59:23 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:23.973449 | orchestrator | 2026-04-05 00:59:23 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:23.975394 | orchestrator | 2026-04-05 00:59:23 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:23.976019 | orchestrator | 2026-04-05 00:59:23 | INFO  | Task 
12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:23.976075 | orchestrator | 2026-04-05 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:27.046481 | orchestrator | 2026-04-05 00:59:27 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:27.047933 | orchestrator | 2026-04-05 00:59:27 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:27.049955 | orchestrator | 2026-04-05 00:59:27 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:27.052384 | orchestrator | 2026-04-05 00:59:27 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:27.054111 | orchestrator | 2026-04-05 00:59:27 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:27.054160 | orchestrator | 2026-04-05 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:30.110863 | orchestrator | 2026-04-05 00:59:30 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:30.112528 | orchestrator | 2026-04-05 00:59:30 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:30.113879 | orchestrator | 2026-04-05 00:59:30 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:30.115657 | orchestrator | 2026-04-05 00:59:30 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:30.117915 | orchestrator | 2026-04-05 00:59:30 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:30.117983 | orchestrator | 2026-04-05 00:59:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:33.160741 | orchestrator | 2026-04-05 00:59:33 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:33.160851 | orchestrator | 2026-04-05 00:59:33 | INFO  | Task 
bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:33.162184 | orchestrator | 2026-04-05 00:59:33 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:33.163293 | orchestrator | 2026-04-05 00:59:33 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:33.166324 | orchestrator | 2026-04-05 00:59:33 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:33.166790 | orchestrator | 2026-04-05 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:36.215348 | orchestrator | 2026-04-05 00:59:36 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:36.217552 | orchestrator | 2026-04-05 00:59:36 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:36.218477 | orchestrator | 2026-04-05 00:59:36 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state STARTED 2026-04-05 00:59:36.220105 | orchestrator | 2026-04-05 00:59:36 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:36.221730 | orchestrator | 2026-04-05 00:59:36 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:36.221759 | orchestrator | 2026-04-05 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:39.264598 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:39.265820 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:39.267596 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task 7ef76344-467e-4898-8d15-b1afc4a09822 is in state SUCCESS 2026-04-05 00:59:39.269157 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 00:59:39.271324 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task 
242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:39.271504 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:39.271526 | orchestrator | 2026-04-05 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:42.319359 | orchestrator | 2026-04-05 00:59:42 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:42.321047 | orchestrator | 2026-04-05 00:59:42 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:42.322732 | orchestrator | 2026-04-05 00:59:42 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 00:59:42.324663 | orchestrator | 2026-04-05 00:59:42 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:42.326572 | orchestrator | 2026-04-05 00:59:42 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:42.326602 | orchestrator | 2026-04-05 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:45.377258 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED 2026-04-05 00:59:45.378430 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED 2026-04-05 00:59:45.379895 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 00:59:45.381467 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:45.383182 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED 2026-04-05 00:59:45.383221 | orchestrator | 2026-04-05 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:48.422904 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task 
c888e91b-1fa5-4894-9d25-14d70717fdec is in state STARTED
2026-04-05 00:59:48.427047 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED
2026-04-05 00:59:48.427599 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED
2026-04-05 00:59:48.429330 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:59:48.430837 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state STARTED
2026-04-05 00:59:48.431081 | orchestrator | 2026-04-05 00:59:48 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:59:51.483720 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task c888e91b-1fa5-4894-9d25-14d70717fdec is in state SUCCESS
2026-04-05 00:59:51.486258 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED
2026-04-05 00:59:51.489250 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED
2026-04-05 00:59:51.493142 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED
2026-04-05 00:59:51.495937 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:59:51.496552 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task 12b91070-174d-40ae-b58e-04e4b1bdd09b is in state SUCCESS
2026-04-05 00:59:51.497372 | orchestrator |
2026-04-05 00:59:51.497403 | orchestrator |
2026-04-05 00:59:51.497414 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-05 00:59:51.497424 | orchestrator |
2026-04-05 00:59:51.497433 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-05 00:59:51.497442 | orchestrator | Sunday 05 April 2026 00:58:54 +0000 (0:00:00.115) 0:00:00.115 **********
2026-04-05 00:59:51.497450 | orchestrator | changed: [localhost]
2026-04-05 00:59:51.497461 | orchestrator |
2026-04-05 00:59:51.497471 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-05 00:59:51.497480 | orchestrator | Sunday 05 April 2026 00:58:55 +0000 (0:00:01.179) 0:00:01.294 **********
2026-04-05 00:59:51.497489 | orchestrator | changed: [localhost]
2026-04-05 00:59:51.497498 | orchestrator |
2026-04-05 00:59:51.497506 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-05 00:59:51.497515 | orchestrator | Sunday 05 April 2026 00:59:32 +0000 (0:00:36.342) 0:00:37.637 **********
2026-04-05 00:59:51.497524 | orchestrator | changed: [localhost]
2026-04-05 00:59:51.497532 | orchestrator |
2026-04-05 00:59:51.497541 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:59:51.497550 | orchestrator |
2026-04-05 00:59:51.497559 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:59:51.497569 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:04.948) 0:00:42.586 **********
2026-04-05 00:59:51.497577 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:59:51.497584 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:59:51.497592 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:59:51.497599 | orchestrator |
2026-04-05 00:59:51.497606 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:59:51.497614 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:00.311) 0:00:42.897 **********
2026-04-05 00:59:51.497621 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-05 00:59:51.497630 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-05 00:59:51.497638 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-05 00:59:51.497647 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-05 00:59:51.497655 | orchestrator |
2026-04-05 00:59:51.497662 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-05 00:59:51.497670 | orchestrator | skipping: no hosts matched
2026-04-05 00:59:51.497678 | orchestrator |
2026-04-05 00:59:51.497686 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:59:51.497694 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.497705 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.497738 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.497746 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.497754 | orchestrator |
2026-04-05 00:59:51.497761 | orchestrator |
2026-04-05 00:59:51.497769 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:59:51.497776 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:00.425) 0:00:43.323 **********
2026-04-05 00:59:51.497783 | orchestrator | ===============================================================================
2026-04-05 00:59:51.497790 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 36.34s
2026-04-05 00:59:51.497798 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.95s
2026-04-05 00:59:51.497806 | orchestrator | Ensure the destination directory exists --------------------------------- 1.18s
2026-04-05 00:59:51.497814 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2026-04-05 00:59:51.497823 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-04-05 00:59:51.497830 | orchestrator |
2026-04-05 00:59:51.497838 | orchestrator |
2026-04-05 00:59:51.497845 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:59:51.497853 | orchestrator |
2026-04-05 00:59:51.497860 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:59:51.497868 | orchestrator | Sunday 05 April 2026 00:58:54 +0000 (0:00:00.335) 0:00:00.335 **********
2026-04-05 00:59:51.497876 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:59:51.497883 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:59:51.497892 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:59:51.497900 | orchestrator |
2026-04-05 00:59:51.497908 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:59:51.497915 | orchestrator | Sunday 05 April 2026 00:58:55 +0000 (0:00:00.337) 0:00:00.672 **********
2026-04-05 00:59:51.497922 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-05 00:59:51.497930 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-05 00:59:51.497938 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-05 00:59:51.497945 | orchestrator |
2026-04-05 00:59:51.497952 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-05 00:59:51.497960 | orchestrator |
2026-04-05 00:59:51.497967 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-05 00:59:51.497976 | orchestrator | Sunday 05 April 2026 00:58:55 +0000 (0:00:00.340) 0:00:01.013 **********
2026-04-05 00:59:51.497993 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:51.498002 | orchestrator |
2026-04-05 00:59:51.498069 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-04-05 00:59:51.498078 | orchestrator | Sunday 05 April 2026 00:58:56 +0000 (0:00:00.708) 0:00:01.721 **********
2026-04-05 00:59:51.498099 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left).
2026-04-05 00:59:51.498109 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left).
2026-04-05 00:59:51.498117 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left).
2026-04-05 00:59:51.498125 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left).
2026-04-05 00:59:51.498134 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left).
2026-04-05 00:59:51.498145 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-05 00:59:51.498164 | orchestrator |
2026-04-05 00:59:51.498172 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:59:51.498180 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.498188 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.498196 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.498204 | orchestrator |
2026-04-05 00:59:51.498212 | orchestrator |
2026-04-05 00:59:51.498220 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:59:51.498229 | orchestrator | Sunday 05 April 2026 00:59:50 +0000 (0:00:53.731) 0:00:55.453 **********
2026-04-05 00:59:51.498237 | orchestrator | ===============================================================================
2026-04-05 00:59:51.498245 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 53.73s
2026-04-05 00:59:51.498253 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.71s
2026-04-05 00:59:51.498260 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s
2026-04-05 00:59:51.498268 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-04-05 00:59:51.498276 | orchestrator |
2026-04-05 00:59:51.498284 | orchestrator |
2026-04-05 00:59:51.498291 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:59:51.498299 | orchestrator |
2026-04-05 00:59:51.498308 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:59:51.498316 | orchestrator | Sunday 05 April 2026 00:58:55 +0000 (0:00:00.366) 0:00:00.366 **********
2026-04-05 00:59:51.498324 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:59:51.498333 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:59:51.498341 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:59:51.498349 | orchestrator |
2026-04-05 00:59:51.498357 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:59:51.498365 | orchestrator | Sunday 05 April 2026 00:58:55 +0000 (0:00:00.305) 0:00:00.671 **********
2026-04-05 00:59:51.498373 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-05 00:59:51.498381 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-05 00:59:51.498390 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-05 00:59:51.498399 | orchestrator |
2026-04-05 00:59:51.498408 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-05 00:59:51.498416 | orchestrator |
2026-04-05 00:59:51.498424 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-05 00:59:51.498431 | orchestrator | Sunday 05 April 2026 00:58:55 +0000 (0:00:00.396) 0:00:01.068 **********
2026-04-05 00:59:51.498439 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:51.498447 | orchestrator |
2026-04-05 00:59:51.498454 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-04-05 00:59:51.498462 | orchestrator | Sunday 05 April 2026 00:58:56 +0000 (0:00:00.780) 0:00:01.849 **********
2026-04-05 00:59:51.498470 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left).
2026-04-05 00:59:51.498478 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left).
2026-04-05 00:59:51.498486 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left).
2026-04-05 00:59:51.498504 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left).
2026-04-05 00:59:51.498513 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left).
2026-04-05 00:59:51.498533 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-05 00:59:51.498542 | orchestrator |
2026-04-05 00:59:51.498550 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:59:51.498559 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.498567 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.498576 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:59:51.498584 | orchestrator |
2026-04-05 00:59:51.498592 | orchestrator |
2026-04-05 00:59:51.498600 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:59:51.498607 | orchestrator | Sunday 05 April 2026 00:59:50 +0000 (0:00:53.574) 0:00:55.423 **********
2026-04-05 00:59:51.498616 | orchestrator | ===============================================================================
2026-04-05 00:59:51.498624 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 53.57s
2026-04-05 00:59:51.498632 | orchestrator | designate : include_tasks ----------------------------------------------- 0.78s
2026-04-05 00:59:51.498639 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-04-05 00:59:51.498647 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-04-05 00:59:51.498656 | orchestrator | 2026-04-05 00:59:51 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:59:54.563907 | orchestrator | 2026-04-05 00:59:54 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state STARTED
2026-04-05 00:59:54.566210 | orchestrator | 2026-04-05 00:59:54 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED
2026-04-05 00:59:54.569829 | orchestrator | 2026-04-05 00:59:54 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED
2026-04-05 00:59:54.573922 | orchestrator | 2026-04-05 00:59:54 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED
2026-04-05 00:59:54.573984 | orchestrator | 2026-04-05 00:59:54 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:59:57.623861 | orchestrator | 2026-04-05 00:59:57 | INFO  | Task bb58c140-1d96-4e72-bba4-4ee76aaa85d5 is in state SUCCESS
2026-04-05 00:59:57.624961 | orchestrator | 2026-04-05 00:59:57 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED
2026-04-05 00:59:57.626747 | orchestrator | 2026-04-05 00:59:57 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in
state STARTED 2026-04-05 00:59:57.629990 | orchestrator | 2026-04-05 00:59:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 00:59:57.631016 | orchestrator | 2026-04-05 00:59:57 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 00:59:57.631405 | orchestrator | 2026-04-05 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:00.697353 | orchestrator | 2026-04-05 01:00:00 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:00.698900 | orchestrator | 2026-04-05 01:00:00 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:00.700873 | orchestrator | 2026-04-05 01:00:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:00.702735 | orchestrator | 2026-04-05 01:00:00 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:00.702816 | orchestrator | 2026-04-05 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:03.742411 | orchestrator | 2026-04-05 01:00:03 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:03.744737 | orchestrator | 2026-04-05 01:00:03 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:03.746752 | orchestrator | 2026-04-05 01:00:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:03.748720 | orchestrator | 2026-04-05 01:00:03 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:03.748775 | orchestrator | 2026-04-05 01:00:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:06.788656 | orchestrator | 2026-04-05 01:00:06 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:06.790750 | orchestrator | 2026-04-05 01:00:06 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 
01:00:06.793305 | orchestrator | 2026-04-05 01:00:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:06.794666 | orchestrator | 2026-04-05 01:00:06 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:06.794699 | orchestrator | 2026-04-05 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:09.833526 | orchestrator | 2026-04-05 01:00:09 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:09.835160 | orchestrator | 2026-04-05 01:00:09 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:09.836693 | orchestrator | 2026-04-05 01:00:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:09.838788 | orchestrator | 2026-04-05 01:00:09 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:09.838848 | orchestrator | 2026-04-05 01:00:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:12.891154 | orchestrator | 2026-04-05 01:00:12 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:12.893255 | orchestrator | 2026-04-05 01:00:12 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:12.896140 | orchestrator | 2026-04-05 01:00:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:12.898632 | orchestrator | 2026-04-05 01:00:12 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:12.898690 | orchestrator | 2026-04-05 01:00:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:15.947638 | orchestrator | 2026-04-05 01:00:15 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:15.950318 | orchestrator | 2026-04-05 01:00:15 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:15.952900 | orchestrator 
| 2026-04-05 01:00:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:15.955705 | orchestrator | 2026-04-05 01:00:15 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:15.955799 | orchestrator | 2026-04-05 01:00:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:19.009103 | orchestrator | 2026-04-05 01:00:19 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:19.010910 | orchestrator | 2026-04-05 01:00:19 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:19.012742 | orchestrator | 2026-04-05 01:00:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:19.014617 | orchestrator | 2026-04-05 01:00:19 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:19.014669 | orchestrator | 2026-04-05 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:22.059312 | orchestrator | 2026-04-05 01:00:22 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:22.061034 | orchestrator | 2026-04-05 01:00:22 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:22.064316 | orchestrator | 2026-04-05 01:00:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:22.066933 | orchestrator | 2026-04-05 01:00:22 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:22.066993 | orchestrator | 2026-04-05 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:25.120724 | orchestrator | 2026-04-05 01:00:25 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:25.123172 | orchestrator | 2026-04-05 01:00:25 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:25.126128 | orchestrator | 2026-04-05 01:00:25 | INFO  | 
Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:25.127576 | orchestrator | 2026-04-05 01:00:25 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:25.128049 | orchestrator | 2026-04-05 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:28.167871 | orchestrator | 2026-04-05 01:00:28 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:28.169077 | orchestrator | 2026-04-05 01:00:28 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:28.170480 | orchestrator | 2026-04-05 01:00:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:28.172635 | orchestrator | 2026-04-05 01:00:28 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:28.172695 | orchestrator | 2026-04-05 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:31.234477 | orchestrator | 2026-04-05 01:00:31 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:31.236296 | orchestrator | 2026-04-05 01:00:31 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:31.238237 | orchestrator | 2026-04-05 01:00:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:31.241669 | orchestrator | 2026-04-05 01:00:31 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state STARTED 2026-04-05 01:00:31.241726 | orchestrator | 2026-04-05 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:34.310564 | orchestrator | 2026-04-05 01:00:34 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED 2026-04-05 01:00:34.312581 | orchestrator | 2026-04-05 01:00:34 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state STARTED 2026-04-05 01:00:34.314841 | orchestrator | 2026-04-05 01:00:34 | INFO  | Task 
4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:00:34.321240 | orchestrator | 2026-04-05 01:00:34 | INFO  | Task 242bea82-5abd-4501-8c47-9e8c51f4ffce is in state SUCCESS 2026-04-05 01:00:34.322365 | orchestrator | 2026-04-05 01:00:34.322404 | orchestrator | 2026-04-05 01:00:34.322412 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:00:34.322420 | orchestrator | 2026-04-05 01:00:34.322427 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:00:34.322434 | orchestrator | Sunday 05 April 2026 00:58:54 +0000 (0:00:00.337) 0:00:00.337 ********** 2026-04-05 01:00:34.322441 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.322450 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.322457 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.322464 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.322471 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.322477 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.322484 | orchestrator | 2026-04-05 01:00:34.322491 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:00:34.322498 | orchestrator | Sunday 05 April 2026 00:58:55 +0000 (0:00:00.717) 0:00:01.055 ********** 2026-04-05 01:00:34.322506 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-05 01:00:34.322512 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-05 01:00:34.322516 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-05 01:00:34.322521 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-05 01:00:34.322525 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-05 01:00:34.322530 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-05 01:00:34.322534 | orchestrator | 2026-04-05 
01:00:34.322538 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-05 01:00:34.322542 | orchestrator | 2026-04-05 01:00:34.322548 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 01:00:34.322555 | orchestrator | Sunday 05 April 2026 00:58:56 +0000 (0:00:00.809) 0:00:01.865 ********** 2026-04-05 01:00:34.322563 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.322571 | orchestrator | 2026-04-05 01:00:34.322578 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-05 01:00:34.322585 | orchestrator | Sunday 05 April 2026 00:58:57 +0000 (0:00:01.280) 0:00:03.145 ********** 2026-04-05 01:00:34.322591 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.322597 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.322603 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.322610 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.322616 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.322622 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.322629 | orchestrator | 2026-04-05 01:00:34.322636 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-05 01:00:34.322643 | orchestrator | Sunday 05 April 2026 00:58:59 +0000 (0:00:01.453) 0:00:04.598 ********** 2026-04-05 01:00:34.322650 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.322657 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.322664 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.322671 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.322677 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.322684 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.322692 | orchestrator | 2026-04-05 
01:00:34.322699 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-05 01:00:34.322707 | orchestrator | Sunday 05 April 2026 00:59:00 +0000 (0:00:01.217) 0:00:05.815 ********** 2026-04-05 01:00:34.322715 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.322723 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.322730 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.322738 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.322766 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.322774 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.322781 | orchestrator | 2026-04-05 01:00:34.322788 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-05 01:00:34.322795 | orchestrator | Sunday 05 April 2026 00:59:01 +0000 (0:00:00.608) 0:00:06.424 ********** 2026-04-05 01:00:34.322816 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.322823 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.322829 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.322835 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.322842 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.322848 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.322855 | orchestrator | 2026-04-05 01:00:34.322862 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-04-05 01:00:34.322869 | orchestrator | Sunday 05 April 2026 00:59:01 +0000 (0:00:00.809) 0:00:07.233 ********** 2026-04-05 01:00:34.322876 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left). 2026-04-05 01:00:34.322885 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left). 
2026-04-05 01:00:34.322892 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left). 2026-04-05 01:00:34.322899 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left). 2026-04-05 01:00:34.322906 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left). 2026-04-05 01:00:34.322915 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-05 01:00:34.322923 | orchestrator | 2026-04-05 01:00:34.322930 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:00:34.322950 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2026-04-05 01:00:34.322958 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:00:34.323042 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:00:34.323104 | orchestrator | testbed-node-3 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:00:34.323116 | orchestrator | testbed-node-4 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:00:34.323123 | orchestrator | testbed-node-5 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:00:34.323130 | orchestrator | 2026-04-05 01:00:34.323137 | orchestrator | 2026-04-05 01:00:34.323144 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 01:00:34.323151 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:53.164) 0:01:00.398 ********** 2026-04-05 01:00:34.323158 | orchestrator | =============================================================================== 2026-04-05 01:00:34.323165 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 53.16s 2026-04-05 01:00:34.323172 | orchestrator | neutron : Get container facts ------------------------------------------- 1.45s 2026-04-05 01:00:34.323178 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.28s 2026-04-05 01:00:34.323196 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.22s 2026-04-05 01:00:34.323202 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-04-05 01:00:34.323209 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.81s 2026-04-05 01:00:34.323215 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s 2026-04-05 01:00:34.323222 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.61s 2026-04-05 01:00:34.323228 | orchestrator | 2026-04-05 01:00:34.324225 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 01:00:34.324256 | orchestrator | 2.16.14 2026-04-05 01:00:34.324261 | orchestrator | 2026-04-05 01:00:34.324265 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-05 01:00:34.324270 | orchestrator | 2026-04-05 01:00:34.324275 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 01:00:34.324279 | orchestrator | Sunday 05 April 2026 00:48:32 +0000 (0:00:00.757) 0:00:00.757 ********** 2026-04-05 01:00:34.324284 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.324289 | orchestrator | 2026-04-05 01:00:34.324293 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 01:00:34.324297 | orchestrator | Sunday 05 April 2026 00:48:33 +0000 (0:00:01.068) 0:00:01.825 ********** 2026-04-05 01:00:34.324301 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.324306 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.324310 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.324314 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.324318 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.324322 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.324326 | orchestrator | 2026-04-05 01:00:34.324331 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 01:00:34.324335 | orchestrator | Sunday 05 April 2026 00:48:35 +0000 (0:00:01.874) 0:00:03.700 ********** 2026-04-05 01:00:34.324339 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.324343 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.324347 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.324351 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.324355 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.324359 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.324363 | orchestrator | 2026-04-05 01:00:34.324368 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 01:00:34.324372 | orchestrator | Sunday 05 April 2026 00:48:36 +0000 (0:00:00.728) 0:00:04.428 ********** 2026-04-05 01:00:34.324376 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.324380 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.324384 | orchestrator | ok: [testbed-node-5] 
2026-04-05 01:00:34.324388 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.324392 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.324396 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.324400 | orchestrator | 2026-04-05 01:00:34.324404 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 01:00:34.324408 | orchestrator | Sunday 05 April 2026 00:48:37 +0000 (0:00:00.959) 0:00:05.387 ********** 2026-04-05 01:00:34.324412 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.324416 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.324420 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.324424 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.324428 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.324433 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.324437 | orchestrator | 2026-04-05 01:00:34.324441 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 01:00:34.324445 | orchestrator | Sunday 05 April 2026 00:48:38 +0000 (0:00:00.954) 0:00:06.342 ********** 2026-04-05 01:00:34.324449 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.324479 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.324483 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.324488 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.324492 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.324496 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.324500 | orchestrator | 2026-04-05 01:00:34.324504 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 01:00:34.324508 | orchestrator | Sunday 05 April 2026 00:48:39 +0000 (0:00:01.084) 0:00:07.427 ********** 2026-04-05 01:00:34.324530 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.324534 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.324538 | 
orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.324542 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.324546 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.324550 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.324555 | orchestrator | 2026-04-05 01:00:34.324640 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 01:00:34.324651 | orchestrator | Sunday 05 April 2026 00:48:40 +0000 (0:00:01.222) 0:00:08.649 ********** 2026-04-05 01:00:34.324655 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.324660 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.324664 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.324668 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.324673 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.324677 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.324681 | orchestrator | 2026-04-05 01:00:34.324685 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 01:00:34.324689 | orchestrator | Sunday 05 April 2026 00:48:41 +0000 (0:00:00.870) 0:00:09.520 ********** 2026-04-05 01:00:34.324693 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.324697 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.324701 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.324705 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.324709 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.324713 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.324717 | orchestrator | 2026-04-05 01:00:34.324722 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 01:00:34.324726 | orchestrator | Sunday 05 April 2026 00:48:42 +0000 (0:00:01.324) 0:00:10.844 ********** 2026-04-05 01:00:34.324730 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:34.324734 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:34.324738 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:34.324742 | orchestrator | 2026-04-05 01:00:34.324746 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 01:00:34.324751 | orchestrator | Sunday 05 April 2026 00:48:44 +0000 (0:00:01.900) 0:00:12.745 ********** 2026-04-05 01:00:34.324755 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.324759 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.324766 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.324783 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.324790 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.324796 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.324803 | orchestrator | 2026-04-05 01:00:34.324809 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 01:00:34.324816 | orchestrator | Sunday 05 April 2026 00:48:47 +0000 (0:00:02.970) 0:00:15.715 ********** 2026-04-05 01:00:34.324822 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:34.324829 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:34.324835 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:34.324842 | orchestrator | 2026-04-05 01:00:34.324848 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 01:00:34.324862 | orchestrator | Sunday 05 April 2026 00:48:51 +0000 (0:00:03.971) 0:00:19.687 ********** 2026-04-05 01:00:34.324871 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-04-05 01:00:34.324879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 01:00:34.324885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 01:00:34.324897 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.324904 | orchestrator | 2026-04-05 01:00:34.324911 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 01:00:34.324918 | orchestrator | Sunday 05 April 2026 00:48:52 +0000 (0:00:00.736) 0:00:20.424 ********** 2026-04-05 01:00:34.324926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.324936 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.324944 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.324952 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.324959 | orchestrator | 2026-04-05 01:00:34.324988 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 01:00:34.324997 | orchestrator | Sunday 05 April 2026 00:48:53 +0000 (0:00:00.862) 0:00:21.286 ********** 2026-04-05 01:00:34.325006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.325016 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.325024 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.325032 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325039 | orchestrator | 2026-04-05 01:00:34.325046 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 01:00:34.325053 | orchestrator | Sunday 05 April 2026 00:48:53 +0000 (0:00:00.192) 0:00:21.478 ********** 2026-04-05 01:00:34.325069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 00:48:49.191211', 'end': '2026-04-05 00:48:49.286390', 'delta': '0:00:00.095179', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.325087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 00:48:49.879636', 'end': '2026-04-05 00:48:49.963647', 'delta': '0:00:00.084011', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.325100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 00:48:51.344185', 'end': '2026-04-05 00:48:51.462064', 'delta': '0:00:00.117879', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.325107 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325114 | orchestrator | 2026-04-05 01:00:34.325121 | orchestrator | TASK [ceph-facts : Set_fact 
_container_exec_cmd] ******************************* 2026-04-05 01:00:34.325128 | orchestrator | Sunday 05 April 2026 00:48:53 +0000 (0:00:00.291) 0:00:21.770 ********** 2026-04-05 01:00:34.325135 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.325142 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.325149 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.325156 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.325162 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.325169 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.325177 | orchestrator | 2026-04-05 01:00:34.325184 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 01:00:34.325191 | orchestrator | Sunday 05 April 2026 00:48:58 +0000 (0:00:04.952) 0:00:26.722 ********** 2026-04-05 01:00:34.325198 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:00:34.325205 | orchestrator | 2026-04-05 01:00:34.325212 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 01:00:34.325219 | orchestrator | Sunday 05 April 2026 00:49:00 +0000 (0:00:02.102) 0:00:28.825 ********** 2026-04-05 01:00:34.325226 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325233 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325240 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325247 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325254 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325260 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325268 | orchestrator | 2026-04-05 01:00:34.325275 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 01:00:34.325282 | orchestrator | Sunday 05 April 2026 00:49:03 +0000 (0:00:02.349) 0:00:31.175 ********** 2026-04-05 01:00:34.325289 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 01:00:34.325296 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325303 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325310 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325316 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325323 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325337 | orchestrator | 2026-04-05 01:00:34.325341 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 01:00:34.325346 | orchestrator | Sunday 05 April 2026 00:49:05 +0000 (0:00:02.015) 0:00:33.191 ********** 2026-04-05 01:00:34.325350 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325354 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325359 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325365 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325372 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325379 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325385 | orchestrator | 2026-04-05 01:00:34.325392 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 01:00:34.325398 | orchestrator | Sunday 05 April 2026 00:49:06 +0000 (0:00:01.006) 0:00:34.198 ********** 2026-04-05 01:00:34.325405 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325412 | orchestrator | 2026-04-05 01:00:34.325419 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 01:00:34.325426 | orchestrator | Sunday 05 April 2026 00:49:06 +0000 (0:00:00.381) 0:00:34.580 ********** 2026-04-05 01:00:34.325433 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325440 | orchestrator | 2026-04-05 01:00:34.325447 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 01:00:34.325453 | 
orchestrator | Sunday 05 April 2026 00:49:06 +0000 (0:00:00.193) 0:00:34.774 ********** 2026-04-05 01:00:34.325460 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325467 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325474 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325487 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325493 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325497 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325502 | orchestrator | 2026-04-05 01:00:34.325506 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 01:00:34.325510 | orchestrator | Sunday 05 April 2026 00:49:07 +0000 (0:00:00.968) 0:00:35.742 ********** 2026-04-05 01:00:34.325514 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325518 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325522 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325527 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325531 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325535 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325539 | orchestrator | 2026-04-05 01:00:34.325543 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 01:00:34.325547 | orchestrator | Sunday 05 April 2026 00:49:08 +0000 (0:00:00.999) 0:00:36.741 ********** 2026-04-05 01:00:34.325551 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325555 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325560 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325564 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325568 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325572 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325576 | orchestrator | 2026-04-05 
01:00:34.325584 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 01:00:34.325589 | orchestrator | Sunday 05 April 2026 00:49:09 +0000 (0:00:00.895) 0:00:37.637 ********** 2026-04-05 01:00:34.325593 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325597 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325601 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325605 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325609 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325613 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325618 | orchestrator | 2026-04-05 01:00:34.325622 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 01:00:34.325626 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:00.871) 0:00:38.508 ********** 2026-04-05 01:00:34.325635 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325639 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325644 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325648 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325652 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325656 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325660 | orchestrator | 2026-04-05 01:00:34.325664 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 01:00:34.325668 | orchestrator | Sunday 05 April 2026 00:49:11 +0000 (0:00:01.052) 0:00:39.560 ********** 2026-04-05 01:00:34.325673 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325677 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325681 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325685 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325689 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 01:00:34.325693 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325697 | orchestrator | 2026-04-05 01:00:34.325701 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 01:00:34.325706 | orchestrator | Sunday 05 April 2026 00:49:12 +0000 (0:00:00.966) 0:00:40.527 ********** 2026-04-05 01:00:34.325710 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325714 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.325718 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.325722 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.325726 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.325730 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.325734 | orchestrator | 2026-04-05 01:00:34.325738 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 01:00:34.325743 | orchestrator | Sunday 05 April 2026 00:49:13 +0000 (0:00:01.197) 0:00:41.724 ********** 2026-04-05 01:00:34.325748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--157b1f80--825d--547a--87b1--b4c204357e87-osd--block--157b1f80--825d--547a--87b1--b4c204357e87', 'dm-uuid-LVM-nQrLrg0BqHGe1A9RVbz4Nu5m0j1vrxufGT7BkWGPm6gLoI0ePIQomnNlHNuIq6pw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b6d430e--d9c3--5542--869b--9d02c8b92670-osd--block--9b6d430e--d9c3--5542--869b--9d02c8b92670', 
'dm-uuid-LVM-pJGTtVd0YecZ46sZFLKOsdsVdlJcVA2onJ2hK2zOqpuPcYhfTRgtwIbvSdlkRXVQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff-osd--block--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff', 'dm-uuid-LVM-SdC8ztndVEjqDn76uiYoCnN9YKXW866zw4C7S5cpDRFMGMeV03iItzmsABbAOW1Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--e4b90bbc--8b4b--55ca--a382--2d9a937d0621-osd--block--e4b90bbc--8b4b--55ca--a382--2d9a937d0621', 'dm-uuid-LVM-40atcOoPTn2r7zcM8xjzJcp5DSddbcu8P5CKkZSxNZ31yxB89hSU7w4vE8f6IoHH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part1', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part14', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part15', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part16', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  
2026-04-05 01:00:34.325841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--157b1f80--825d--547a--87b1--b4c204357e87-osd--block--157b1f80--825d--547a--87b1--b4c204357e87'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iMmvsK-0VwP-4LtN-JgAY-8KwJ-Qkzj-Sd6GTm', 'scsi-0QEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087', 'scsi-SQEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.325855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9b6d430e--d9c3--5542--869b--9d02c8b92670-osd--block--9b6d430e--d9c3--5542--869b--9d02c8b92670'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rUfKFI-2hLM-RLIH-NWWZ-lLs3-Hr3n-6MCkAD', 'scsi-0QEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966', 'scsi-SQEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.325871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30', 'scsi-SQEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.325876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-25-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.325885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325905 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.325913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.325921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part1', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part14', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part15', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part16', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.325926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff-osd--block--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GyMLKF-vHry-RHc8-7cfV-tFf6-qbXV-SENsCI', 'scsi-0QEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2', 'scsi-SQEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.325931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e4b90bbc--8b4b--55ca--a382--2d9a937d0621-osd--block--e4b90bbc--8b4b--55ca--a382--2d9a937d0621'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6y2lAn-RokD-h7cF-v8Tu-gO13-n3Fe-OrwFWK', 'scsi-0QEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767', 'scsi-SQEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92', 'scsi-SQEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--f6b2ea8b--e42f--5ec6--b7af--dc106d037603-osd--block--f6b2ea8b--e42f--5ec6--b7af--dc106d037603', 'dm-uuid-LVM-64tzOHgG53FLXCSb5I0VPAT3nsukjOL16ewcjji0zoeq4oyylltpfn74y4tIzZcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecfcc343--98df--5597--aad3--97c87b883418-osd--block--ecfcc343--98df--5597--aad3--97c87b883418', 'dm-uuid-LVM-5ExjhmqCLQRr1pQ6CfVmcM7UkPJni7dcskxYgjKgsBz0rDCEmGj1VvwWLGVzvCuY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326248 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.326254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326290 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc', 'scsi-SQEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc-part1', 'scsi-SQEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc-part14', 'scsi-SQEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc-part15', 'scsi-SQEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc-part16', 'scsi-SQEMU_QEMU_HARDDISK_a8ff504c-b3d4-442d-8fb6-d5534c9b4dcc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part1', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part14', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part15', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part16', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326415 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f6b2ea8b--e42f--5ec6--b7af--dc106d037603-osd--block--f6b2ea8b--e42f--5ec6--b7af--dc106d037603'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a29ITw-PgrV-2Yfg-fVgD-Du8V-njBE-NtDgfI', 'scsi-0QEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9', 'scsi-SQEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ecfcc343--98df--5597--aad3--97c87b883418-osd--block--ecfcc343--98df--5597--aad3--97c87b883418'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5C5KmZ-D4SZ-KmL6-Wc4J-nTNr-URgw-SQyKn9', 'scsi-0QEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304', 'scsi-SQEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d', 'scsi-SQEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326438 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.326446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326507 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.326511 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.326516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-04-05 01:00:34.326544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:34.326562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:34.326645 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-05 01:00:34.326652 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.326656 | orchestrator |
2026-04-05 01:00:34.326663 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-05 01:00:34.326672 | orchestrator | Sunday 05 April 2026 00:49:16 +0000 (0:00:02.189) 0:00:43.915 **********
2026-04-05 01:00:34.326688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--157b1f80--825d--547a--87b1--b4c204357e87-osd--block--157b1f80--825d--547a--87b1--b4c204357e87', 'dm-uuid-LVM-nQrLrg0BqHGe1A9RVbz4Nu5m0j1vrxufGT7BkWGPm6gLoI0ePIQomnNlHNuIq6pw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 01:00:34.326696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool',
'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff-osd--block--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff', 'dm-uuid-LVM-SdC8ztndVEjqDn76uiYoCnN9YKXW866zw4C7S5cpDRFMGMeV03iItzmsABbAOW1Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 01:00:34.326704 | orchestrator | skipping: [testbed-node-4] => (item=dm-1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.326718 | orchestrator | skipping: [testbed-node-3] => (item=dm-1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327219 | orchestrator | skipping: [testbed-node-4] => (item=loop0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327275 | orchestrator | skipping: [testbed-node-3] => (item=loop0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327295 | orchestrator | skipping: [testbed-node-3] => (item=loop1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327303 | orchestrator | skipping: [testbed-node-4] => (item=loop1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327309 | orchestrator | skipping: [testbed-node-3] => (item=loop2, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327331 | orchestrator | skipping: [testbed-node-4] => (item=loop2, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327374 | orchestrator | skipping: [testbed-node-3] => (item=loop3, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327457 | orchestrator | skipping: [testbed-node-4] => (item=loop3, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327467 | orchestrator | skipping: [testbed-node-3] => (item=loop4, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327480 | orchestrator | skipping: [testbed-node-4] => (item=loop4, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327487 | orchestrator | skipping: [testbed-node-3] => (item=loop5, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327538 | orchestrator | skipping: [testbed-node-3] => (item=loop6, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327546 | orchestrator | skipping: [testbed-node-3] => (item=loop7, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327554 | orchestrator | skipping: [testbed-node-4] => (item=loop5, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327843 | orchestrator | skipping: [testbed-node-3] => (item=sda, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327872 | orchestrator | skipping: [testbed-node-4] => (item=loop6, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327881 | orchestrator | skipping: [testbed-node-3] => (item=sdb, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327920 | orchestrator | skipping: [testbed-node-4] => (item=loop7, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327934 | orchestrator | skipping: [testbed-node-4] => (item=sda, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327948 | orchestrator | skipping: [testbed-node-4] => (item=sdb, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327961 | orchestrator | skipping: [testbed-node-4] => (item=sdc, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.327992 | orchestrator | skipping: [testbed-node-4] => (item=sdd, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328001 | orchestrator | skipping: [testbed-node-3] => (item=sdc, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328014 | orchestrator | skipping: [testbed-node-3] => (item=sdd, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328023 | orchestrator | skipping: [testbed-node-4] => (item=sr0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328037 | orchestrator | skipping: [testbed-node-3] => (item=sr0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328051 | orchestrator | skipping: [testbed-node-0] => (item=loop0, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328060 | orchestrator | skipping: [testbed-node-0] => (item=loop1, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328077 | orchestrator | skipping: [testbed-node-5] => (item=dm-0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328085 | orchestrator | skipping: [testbed-node-0] => (item=loop2, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328092 | orchestrator | skipping: [testbed-node-0] => (item=loop3, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328104 | orchestrator | skipping: [testbed-node-0] => (item=loop4, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328113 | orchestrator | skipping: [testbed-node-0] => (item=loop5, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328125 | orchestrator | skipping: [testbed-node-0] => (item=loop6, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328139 | orchestrator | skipping: [testbed-node-0] => (item=loop7, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328147 | orchestrator | skipping: [testbed-node-5] => (item=dm-1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328214 | orchestrator | skipping: [testbed-node-0] => (item=sda, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328226 | orchestrator | skipping: [testbed-node-0] => (item=sr0, false_condition='inventory_hostname in groups.get(osd_group_name, [])')
2026-04-05 01:00:34.328240 | orchestrator | skipping: [testbed-node-5] => (item=loop0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328248 | orchestrator | skipping: [testbed-node-5] => (item=loop1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328256 | orchestrator | skipping: [testbed-node-5] => (item=loop2, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328266 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.328277 | orchestrator | skipping: [testbed-node-5] => (item=loop3, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328283 | orchestrator | skipping: [testbed-node-5] => (item=loop4, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328295 | orchestrator | skipping: [testbed-node-5] => (item=loop5, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328309 | orchestrator | skipping: [testbed-node-5] => (item=loop6, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328317 | orchestrator | skipping: [testbed-node-5] => (item=loop7, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-05 01:00:34.328330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part1', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part14', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part15', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part16', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328348 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f6b2ea8b--e42f--5ec6--b7af--dc106d037603-osd--block--f6b2ea8b--e42f--5ec6--b7af--dc106d037603'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a29ITw-PgrV-2Yfg-fVgD-Du8V-njBE-NtDgfI', 'scsi-0QEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9', 'scsi-SQEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328356 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ecfcc343--98df--5597--aad3--97c87b883418-osd--block--ecfcc343--98df--5597--aad3--97c87b883418'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5C5KmZ-D4SZ-KmL6-Wc4J-nTNr-URgw-SQyKn9', 'scsi-0QEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304', 'scsi-SQEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328364 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d', 'scsi-SQEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328375 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328436 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328464 | orchestrator | 
skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328473 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328482 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328490 | orchestrator | skipping: [testbed-node-1] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328499 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328514 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328527 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328542 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part15', 
'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_cff99bdd-b08e-41cb-b514-098ff9f837f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328553 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328562 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.328574 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.328583 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.328591 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
01:00:34.328600 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328619 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328627 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328634 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328641 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.328949 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.329181 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.329223 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.329233 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f8d9972-819f-4b60-a30c-3e1038c24698-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 01:00:34.329242 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:34.329257 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.329265 | orchestrator | 2026-04-05 01:00:34.329322 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 01:00:34.329333 | orchestrator | Sunday 05 April 2026 00:49:19 +0000 (0:00:03.291) 0:00:47.206 ********** 2026-04-05 01:00:34.329341 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.329349 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.329357 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.329364 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.329371 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.329444 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.329453 | orchestrator | 2026-04-05 01:00:34.329461 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 01:00:34.329467 | orchestrator | Sunday 05 April 2026 00:49:21 +0000 (0:00:02.103) 0:00:49.310 ********** 2026-04-05 01:00:34.329473 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.329479 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.329484 | 
orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.329491 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.329497 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.329504 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.329509 | orchestrator | 2026-04-05 01:00:34.329516 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:00:34.329522 | orchestrator | Sunday 05 April 2026 00:49:22 +0000 (0:00:01.436) 0:00:50.747 ********** 2026-04-05 01:00:34.329529 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.329537 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.329607 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.329617 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.329624 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.329630 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.329668 | orchestrator | 2026-04-05 01:00:34.329717 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:00:34.329724 | orchestrator | Sunday 05 April 2026 00:49:24 +0000 (0:00:01.268) 0:00:52.015 ********** 2026-04-05 01:00:34.329731 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.329738 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.329745 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.329753 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.329894 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.329903 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.329910 | orchestrator | 2026-04-05 01:00:34.329918 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:00:34.329925 | orchestrator | Sunday 05 April 2026 00:49:25 +0000 (0:00:01.663) 0:00:53.679 ********** 2026-04-05 01:00:34.329932 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 01:00:34.329960 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.330117 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.330194 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.330204 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.330212 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.330219 | orchestrator | 2026-04-05 01:00:34.330226 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:00:34.330234 | orchestrator | Sunday 05 April 2026 00:49:27 +0000 (0:00:01.325) 0:00:55.004 ********** 2026-04-05 01:00:34.330242 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.330249 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.330467 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.330485 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.330493 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.330501 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.330508 | orchestrator | 2026-04-05 01:00:34.330515 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 01:00:34.330522 | orchestrator | Sunday 05 April 2026 00:49:28 +0000 (0:00:01.260) 0:00:56.265 ********** 2026-04-05 01:00:34.330578 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-05 01:00:34.330587 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-05 01:00:34.330593 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-05 01:00:34.330599 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-05 01:00:34.330606 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-05 01:00:34.330611 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-05 01:00:34.330618 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2026-04-05 01:00:34.330624 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-05 01:00:34.330630 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-05 01:00:34.330638 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 01:00:34.330645 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-05 01:00:34.330652 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-05 01:00:34.330658 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-05 01:00:34.330665 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 01:00:34.330672 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-05 01:00:34.330679 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-05 01:00:34.330686 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-05 01:00:34.330693 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-05 01:00:34.330700 | orchestrator | 2026-04-05 01:00:34.330708 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 01:00:34.330715 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:04.424) 0:01:00.690 ********** 2026-04-05 01:00:34.330722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 01:00:34.330729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 01:00:34.330736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 01:00:34.330743 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.330750 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 01:00:34.330757 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 01:00:34.330764 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 01:00:34.330772 | orchestrator | skipping: [testbed-node-4] 
2026-04-05 01:00:34.330779 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 01:00:34.330820 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 01:00:34.330829 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 01:00:34.330873 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.330917 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 01:00:34.331051 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 01:00:34.331060 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 01:00:34.331067 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.331075 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 01:00:34.331082 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 01:00:34.331089 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 01:00:34.331095 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.331102 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 01:00:34.331110 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 01:00:34.331117 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 01:00:34.331124 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.331131 | orchestrator | 2026-04-05 01:00:34.331145 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 01:00:34.331153 | orchestrator | Sunday 05 April 2026 00:49:35 +0000 (0:00:02.496) 0:01:03.186 ********** 2026-04-05 01:00:34.331169 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.331176 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.331184 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.331192 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.331199 | orchestrator | 2026-04-05 01:00:34.331206 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 01:00:34.331214 | orchestrator | Sunday 05 April 2026 00:49:36 +0000 (0:00:01.630) 0:01:04.817 ********** 2026-04-05 01:00:34.331221 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.331228 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.331236 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.331243 | orchestrator | 2026-04-05 01:00:34.331250 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 01:00:34.331258 | orchestrator | Sunday 05 April 2026 00:49:37 +0000 (0:00:00.542) 0:01:05.360 ********** 2026-04-05 01:00:34.331265 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.331271 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.331276 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.331282 | orchestrator | 2026-04-05 01:00:34.331289 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 01:00:34.331296 | orchestrator | Sunday 05 April 2026 00:49:38 +0000 (0:00:00.551) 0:01:05.911 ********** 2026-04-05 01:00:34.331303 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.331310 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.331318 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.331325 | orchestrator | 2026-04-05 01:00:34.331332 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 01:00:34.331339 | orchestrator | Sunday 05 April 2026 00:49:38 +0000 (0:00:00.783) 0:01:06.695 ********** 2026-04-05 01:00:34.331347 | orchestrator | 
ok: [testbed-node-4] 2026-04-05 01:00:34.331354 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.331361 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.331369 | orchestrator | 2026-04-05 01:00:34.331376 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 01:00:34.331383 | orchestrator | Sunday 05 April 2026 00:49:40 +0000 (0:00:01.733) 0:01:08.428 ********** 2026-04-05 01:00:34.331390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.331396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.331403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.331409 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.331415 | orchestrator | 2026-04-05 01:00:34.331422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 01:00:34.331429 | orchestrator | Sunday 05 April 2026 00:49:41 +0000 (0:00:00.645) 0:01:09.074 ********** 2026-04-05 01:00:34.331436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.331443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.331451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.331458 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.331465 | orchestrator | 2026-04-05 01:00:34.331472 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 01:00:34.331480 | orchestrator | Sunday 05 April 2026 00:49:42 +0000 (0:00:01.004) 0:01:10.079 ********** 2026-04-05 01:00:34.331488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.331496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.331504 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-05 01:00:34.331512 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.331520 | orchestrator | 2026-04-05 01:00:34.331533 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 01:00:34.331541 | orchestrator | Sunday 05 April 2026 00:49:43 +0000 (0:00:00.897) 0:01:10.977 ********** 2026-04-05 01:00:34.331549 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.331555 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.331561 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.331567 | orchestrator | 2026-04-05 01:00:34.331573 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 01:00:34.331579 | orchestrator | Sunday 05 April 2026 00:49:43 +0000 (0:00:00.751) 0:01:11.728 ********** 2026-04-05 01:00:34.331586 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 01:00:34.331592 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 01:00:34.331629 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 01:00:34.331637 | orchestrator | 2026-04-05 01:00:34.331643 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 01:00:34.331650 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:00.721) 0:01:12.450 ********** 2026-04-05 01:00:34.331657 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:34.331665 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:34.331671 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:34.331677 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:00:34.331683 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 01:00:34.331689 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:00:34.331695 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:00:34.331702 | orchestrator | 2026-04-05 01:00:34.331708 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 01:00:34.331721 | orchestrator | Sunday 05 April 2026 00:49:45 +0000 (0:00:01.197) 0:01:13.648 ********** 2026-04-05 01:00:34.331728 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:34.331734 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:34.331741 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:34.331748 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:00:34.331756 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 01:00:34.331763 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:00:34.331770 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:00:34.331776 | orchestrator | 2026-04-05 01:00:34.331784 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 01:00:34.331792 | orchestrator | Sunday 05 April 2026 00:49:49 +0000 (0:00:03.280) 0:01:16.928 ********** 2026-04-05 01:00:34.331802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.331812 | orchestrator | 2026-04-05 01:00:34.331819 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-04-05 01:00:34.331827 | orchestrator | Sunday 05 April 2026 00:49:50 +0000 (0:00:01.435) 0:01:18.363 ********** 2026-04-05 01:00:34.331835 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.331845 | orchestrator | 2026-04-05 01:00:34.331853 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 01:00:34.331861 | orchestrator | Sunday 05 April 2026 00:49:52 +0000 (0:00:01.691) 0:01:20.054 ********** 2026-04-05 01:00:34.331875 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.331883 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.331891 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.331900 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.331907 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.331916 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.331923 | orchestrator | 2026-04-05 01:00:34.331931 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 01:00:34.331939 | orchestrator | Sunday 05 April 2026 00:49:53 +0000 (0:00:01.339) 0:01:21.394 ********** 2026-04-05 01:00:34.331948 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.331956 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.331963 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332023 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332031 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332039 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.332047 | orchestrator | 2026-04-05 01:00:34.332056 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 01:00:34.332064 | orchestrator | Sunday 05 April 2026 00:49:54 +0000 
(0:00:01.010) 0:01:22.405 ********** 2026-04-05 01:00:34.332072 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332080 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.332087 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332094 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.332102 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.332109 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332116 | orchestrator | 2026-04-05 01:00:34.332123 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 01:00:34.332131 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.789) 0:01:23.194 ********** 2026-04-05 01:00:34.332138 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.332145 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.332152 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.332159 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332166 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332174 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332181 | orchestrator | 2026-04-05 01:00:34.332188 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 01:00:34.332195 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.827) 0:01:24.022 ********** 2026-04-05 01:00:34.332202 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.332210 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.332217 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.332224 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.332232 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.332272 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.332279 | orchestrator | 2026-04-05 01:00:34.332285 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-04-05 01:00:34.332292 | orchestrator | Sunday 05 April 2026 00:49:57 +0000 (0:00:01.290) 0:01:25.312 ********** 2026-04-05 01:00:34.332300 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.332307 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.332315 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.332322 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332329 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332336 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332344 | orchestrator | 2026-04-05 01:00:34.332351 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 01:00:34.332358 | orchestrator | Sunday 05 April 2026 00:49:59 +0000 (0:00:01.685) 0:01:26.998 ********** 2026-04-05 01:00:34.332365 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.332373 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.332380 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.332393 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332401 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332408 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332415 | orchestrator | 2026-04-05 01:00:34.332423 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 01:00:34.332436 | orchestrator | Sunday 05 April 2026 00:50:00 +0000 (0:00:00.902) 0:01:27.900 ********** 2026-04-05 01:00:34.332443 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.332450 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.332457 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.332464 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.332471 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.332478 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.332486 | orchestrator 
| 2026-04-05 01:00:34.332493 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 01:00:34.332500 | orchestrator | Sunday 05 April 2026 00:50:01 +0000 (0:00:01.576) 0:01:29.477 ********** 2026-04-05 01:00:34.332507 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.332515 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.332522 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.332530 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.332537 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.332544 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.332551 | orchestrator | 2026-04-05 01:00:34.332559 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 01:00:34.332566 | orchestrator | Sunday 05 April 2026 00:50:03 +0000 (0:00:02.194) 0:01:31.671 ********** 2026-04-05 01:00:34.332573 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.332580 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.332587 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.332595 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332602 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332609 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332616 | orchestrator | 2026-04-05 01:00:34.332623 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 01:00:34.332630 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:01.217) 0:01:32.889 ********** 2026-04-05 01:00:34.332637 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.332644 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.332651 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.332659 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.332666 | orchestrator | ok: [testbed-node-1] 2026-04-05 
01:00:34.332673 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.332680 | orchestrator | 2026-04-05 01:00:34.332687 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 01:00:34.332694 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:00.942) 0:01:33.831 ********** 2026-04-05 01:00:34.332700 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.332706 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.332712 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.332719 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332726 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332733 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332740 | orchestrator | 2026-04-05 01:00:34.332746 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 01:00:34.332754 | orchestrator | Sunday 05 April 2026 00:50:06 +0000 (0:00:00.948) 0:01:34.780 ********** 2026-04-05 01:00:34.332761 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.332767 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.332775 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.332781 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332787 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332793 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332798 | orchestrator | 2026-04-05 01:00:34.332805 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 01:00:34.332818 | orchestrator | Sunday 05 April 2026 00:50:07 +0000 (0:00:00.913) 0:01:35.694 ********** 2026-04-05 01:00:34.332825 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.332832 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.332839 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.332846 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 01:00:34.332853 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332860 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332867 | orchestrator | 2026-04-05 01:00:34.332874 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 01:00:34.332882 | orchestrator | Sunday 05 April 2026 00:50:08 +0000 (0:00:01.105) 0:01:36.799 ********** 2026-04-05 01:00:34.332889 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.332896 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.332903 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.332911 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.332918 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.332925 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.332932 | orchestrator | 2026-04-05 01:00:34.332940 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 01:00:34.332947 | orchestrator | Sunday 05 April 2026 00:50:09 +0000 (0:00:00.862) 0:01:37.662 ********** 2026-04-05 01:00:34.332954 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.332961 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.332984 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.332992 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.333028 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.333037 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.333044 | orchestrator | 2026-04-05 01:00:34.333051 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 01:00:34.333058 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:01.028) 0:01:38.691 ********** 2026-04-05 01:00:34.333065 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.333072 | orchestrator | skipping: [testbed-node-4] 
2026-04-05 01:00:34.333079 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.333086 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.333094 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.333101 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.333108 | orchestrator | 2026-04-05 01:00:34.333116 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 01:00:34.333123 | orchestrator | Sunday 05 April 2026 00:50:11 +0000 (0:00:00.981) 0:01:39.672 ********** 2026-04-05 01:00:34.333130 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.333138 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.333145 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.333152 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.333159 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.333166 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.333173 | orchestrator | 2026-04-05 01:00:34.333181 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 01:00:34.333193 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:01.135) 0:01:40.807 ********** 2026-04-05 01:00:34.333200 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.333207 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.333215 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.333222 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.333229 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.333236 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.333244 | orchestrator | 2026-04-05 01:00:34.333250 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 01:00:34.333258 | orchestrator | Sunday 05 April 2026 00:50:14 +0000 (0:00:01.887) 0:01:42.695 ********** 2026-04-05 01:00:34.333265 | orchestrator | changed: [testbed-node-5] 2026-04-05 
01:00:34.333271 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.333277 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.333289 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.333297 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.333304 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.333311 | orchestrator | 2026-04-05 01:00:34.333318 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 01:00:34.333325 | orchestrator | Sunday 05 April 2026 00:50:18 +0000 (0:00:03.454) 0:01:46.150 ********** 2026-04-05 01:00:34.333332 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.333340 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.333347 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.333355 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.333362 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.333369 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.333376 | orchestrator | 2026-04-05 01:00:34.333384 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 01:00:34.333391 | orchestrator | Sunday 05 April 2026 00:50:20 +0000 (0:00:02.623) 0:01:48.773 ********** 2026-04-05 01:00:34.333399 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.333406 | orchestrator | 2026-04-05 01:00:34.333414 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-05 01:00:34.333421 | orchestrator | Sunday 05 April 2026 00:50:22 +0000 (0:00:01.514) 0:01:50.287 ********** 2026-04-05 01:00:34.333429 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.333436 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.333443 
| orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.333451 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.333458 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.333465 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.333472 | orchestrator | 2026-04-05 01:00:34.333480 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-05 01:00:34.333487 | orchestrator | Sunday 05 April 2026 00:50:23 +0000 (0:00:01.052) 0:01:51.339 ********** 2026-04-05 01:00:34.333494 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.333501 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.333509 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.333516 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.333524 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.333531 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.333538 | orchestrator | 2026-04-05 01:00:34.333546 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-05 01:00:34.333552 | orchestrator | Sunday 05 April 2026 00:50:24 +0000 (0:00:00.982) 0:01:52.322 ********** 2026-04-05 01:00:34.333560 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 01:00:34.333567 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 01:00:34.333575 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 01:00:34.333583 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 01:00:34.333590 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 01:00:34.333597 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 01:00:34.333604 | 
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 01:00:34.333612 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 01:00:34.333619 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 01:00:34.333626 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 01:00:34.333656 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 01:00:34.333674 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 01:00:34.333681 | orchestrator | 2026-04-05 01:00:34.333687 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-05 01:00:34.333695 | orchestrator | Sunday 05 April 2026 00:50:26 +0000 (0:00:02.332) 0:01:54.654 ********** 2026-04-05 01:00:34.333701 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.333709 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.333716 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.333723 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.333730 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.333737 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.333745 | orchestrator | 2026-04-05 01:00:34.333752 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-05 01:00:34.333759 | orchestrator | Sunday 05 April 2026 00:50:28 +0000 (0:00:01.269) 0:01:55.923 ********** 2026-04-05 01:00:34.333766 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.333774 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.333781 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.333788 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
01:00:34.333799 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.333807 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.333815 | orchestrator | 2026-04-05 01:00:34.333822 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-05 01:00:34.333829 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:01.163) 0:01:57.087 ********** 2026-04-05 01:00:34.333837 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.333844 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.333851 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.333858 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.333865 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.333871 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.333877 | orchestrator | 2026-04-05 01:00:34.333884 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 01:00:34.333891 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:00.715) 0:01:57.803 ********** 2026-04-05 01:00:34.333897 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.333904 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.333911 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.333918 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.333926 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.333933 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.333940 | orchestrator | 2026-04-05 01:00:34.333947 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 01:00:34.333954 | orchestrator | Sunday 05 April 2026 00:50:31 +0000 (0:00:01.267) 0:01:59.071 ********** 2026-04-05 01:00:34.333962 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.333985 | orchestrator | 2026-04-05 01:00:34.333991 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-05 01:00:34.333997 | orchestrator | Sunday 05 April 2026 00:50:32 +0000 (0:00:01.775) 0:02:00.846 ********** 2026-04-05 01:00:34.334004 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.334011 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.334048 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.334055 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.334063 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.334070 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.334078 | orchestrator | 2026-04-05 01:00:34.334086 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-05 01:00:34.334093 | orchestrator | Sunday 05 April 2026 00:51:36 +0000 (0:01:03.080) 0:03:03.926 ********** 2026-04-05 01:00:34.334111 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 01:00:34.334118 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 01:00:34.334126 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 01:00:34.334133 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.334141 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 01:00:34.334148 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 01:00:34.334156 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 01:00:34.334163 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.334171 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-04-05 01:00:34.334178 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:34.334186 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:34.334193 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.334201 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:34.334208 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:34.334216 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:34.334224 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.334231 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:34.334239 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:34.334246 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:34.334253 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.334289 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 01:00:34.334298 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 01:00:34.334305 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 01:00:34.334312 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.334319 | orchestrator |
2026-04-05 01:00:34.334326 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 01:00:34.334333 | orchestrator | Sunday 05 April 2026 00:51:37 +0000 (0:00:01.021) 0:03:04.947 **********
2026-04-05 01:00:34.334341 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.334348 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.334355 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.334362 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.334369 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.334376 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.334383 | orchestrator |
2026-04-05 01:00:34.334390 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 01:00:34.334398 | orchestrator | Sunday 05 April 2026 00:51:37 +0000 (0:00:00.797) 0:03:05.745 **********
2026-04-05 01:00:34.334405 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.334413 | orchestrator |
2026-04-05 01:00:34.334424 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 01:00:34.334432 | orchestrator | Sunday 05 April 2026 00:51:38 +0000 (0:00:00.164) 0:03:05.909 **********
2026-04-05 01:00:34.334439 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.334447 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.334454 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.334461 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.334469 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.334475 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.334482 | orchestrator |
2026-04-05 01:00:34.334495 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 01:00:34.334503 | orchestrator | Sunday 05 April 2026 00:51:38 +0000 (0:00:00.916) 0:03:06.826 **********
2026-04-05 01:00:34.334510 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.334518 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.334525 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.334533 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.334540 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.334547 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.334554 | orchestrator |
2026-04-05 01:00:34.334561 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 01:00:34.334568 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:00.690) 0:03:07.516 **********
2026-04-05 01:00:34.334576 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.334583 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.334590 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.334597 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.334604 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.334612 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.334619 | orchestrator |
2026-04-05 01:00:34.334626 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 01:00:34.334633 | orchestrator | Sunday 05 April 2026 00:51:40 +0000 (0:00:01.090) 0:03:08.606 **********
2026-04-05 01:00:34.334641 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.334648 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.334655 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.334663 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.334670 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.334678 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.334685 | orchestrator |
2026-04-05 01:00:34.334692 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 01:00:34.334699 | orchestrator | Sunday 05 April 2026 00:51:44 +0000 (0:00:03.361) 0:03:11.967 **********
2026-04-05 01:00:34.334705 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.334711 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.334718 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.334725 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.334732 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.334739 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.334746 | orchestrator |
2026-04-05 01:00:34.334753 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 01:00:34.334760 | orchestrator | Sunday 05 April 2026 00:51:44 +0000 (0:00:00.893) 0:03:12.861 **********
2026-04-05 01:00:34.334768 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.334777 | orchestrator |
2026-04-05 01:00:34.334784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 01:00:34.334791 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:01.221) 0:03:14.083 **********
2026-04-05 01:00:34.334798 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.334805 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.334812 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.334819 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.334827 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.334834 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.334841 | orchestrator |
2026-04-05 01:00:34.334848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 01:00:34.334855 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:00.676) 0:03:14.760 **********
2026-04-05 01:00:34.334862 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.334869 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.334877 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.334884 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.334895 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.334903 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.334910 | orchestrator |
2026-04-05 01:00:34.334917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 01:00:34.334924 | orchestrator | Sunday 05 April 2026 00:51:47 +0000 (0:00:01.104) 0:03:15.864 **********
2026-04-05 01:00:34.334931 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.334939 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.334981 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.334990 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.334996 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.335003 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.335009 | orchestrator |
2026-04-05 01:00:34.335016 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 01:00:34.335023 | orchestrator | Sunday 05 April 2026 00:51:49 +0000 (0:00:01.040) 0:03:16.905 **********
2026-04-05 01:00:34.335031 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.335038 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.335045 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.335052 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.335059 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.335066 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.335073 | orchestrator |
2026-04-05 01:00:34.335080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 01:00:34.335087 | orchestrator | Sunday 05 April 2026 00:51:50 +0000 (0:00:01.032) 0:03:17.937 **********
2026-04-05 01:00:34.335095 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.335101 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.335108 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.335116 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.335123 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.335134 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.335142 | orchestrator |
2026-04-05 01:00:34.335149 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 01:00:34.335156 | orchestrator | Sunday 05 April 2026 00:51:50 +0000 (0:00:00.717) 0:03:18.655 **********
2026-04-05 01:00:34.335163 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.335170 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.335177 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.335184 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.335192 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.335199 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.335206 | orchestrator |
2026-04-05 01:00:34.335213 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 01:00:34.335220 | orchestrator | Sunday 05 April 2026 00:51:51 +0000 (0:00:00.875) 0:03:19.530 **********
2026-04-05 01:00:34.335228 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.335235 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.335242 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.335249 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.335256 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.335264 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.335270 | orchestrator |
2026-04-05 01:00:34.335276 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 01:00:34.335282 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:00.624) 0:03:20.155 **********
2026-04-05 01:00:34.335289 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.335297 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.335305 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.335311 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.335318 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.335326 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.335339 | orchestrator |
2026-04-05 01:00:34.335346 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 01:00:34.335354 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:00.643) 0:03:20.798 **********
2026-04-05 01:00:34.335360 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.335368 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.335375 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.335382 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.335390 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.335397 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.335404 | orchestrator |
2026-04-05 01:00:34.335411 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 01:00:34.335418 | orchestrator | Sunday 05 April 2026 00:51:54 +0000 (0:00:01.348) 0:03:22.147 **********
2026-04-05 01:00:34.335426 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.335433 | orchestrator |
2026-04-05 01:00:34.335440 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 01:00:34.335447 | orchestrator | Sunday 05 April 2026 00:51:55 +0000 (0:00:01.531) 0:03:23.679 **********
2026-04-05 01:00:34.335454 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-05 01:00:34.335462 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-05 01:00:34.335469 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-05 01:00:34.335477 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-05 01:00:34.335484 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-05 01:00:34.335491 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:34.335499 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:34.335506 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-05 01:00:34.335513 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-05 01:00:34.335520 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-05 01:00:34.335527 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:34.335535 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:34.335542 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-05 01:00:34.335550 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-05 01:00:34.335557 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:34.335564 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-05 01:00:34.335571 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:34.335579 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:34.335611 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-05 01:00:34.335620 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:34.335627 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:34.335635 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:34.335642 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:34.335649 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:34.335656 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-05 01:00:34.335664 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:34.335671 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:34.335678 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:34.335685 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:34.335692 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:34.335700 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-05 01:00:34.335712 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:34.335723 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:34.335731 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:34.335739 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:34.335747 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-05 01:00:34.335754 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:34.335761 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:34.335769 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:34.335776 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:34.335783 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:34.335791 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:34.335798 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-05 01:00:34.335806 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:34.335813 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:34.335820 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:34.335827 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:34.335835 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:34.335842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:34.335849 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-05 01:00:34.335856 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:34.335864 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:34.335871 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:34.335878 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:34.335886 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:34.335893 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-05 01:00:34.335900 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:34.335907 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:34.335915 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:34.335922 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:34.335929 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:34.335937 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 01:00:34.335944 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:34.335951 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:34.335958 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:34.336010 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:34.336021 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:34.336029 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 01:00:34.336036 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:34.336043 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:34.336050 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:34.336062 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:34.336068 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:34.336075 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 01:00:34.336082 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:34.336088 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:34.336124 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-05 01:00:34.336133 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:34.336140 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-05 01:00:34.336147 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 01:00:34.336154 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:34.336161 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-05 01:00:34.336168 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:34.336175 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-05 01:00:34.336183 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 01:00:34.336190 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:34.336197 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-05 01:00:34.336204 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:34.336211 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 01:00:34.336223 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-05 01:00:34.336231 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-05 01:00:34.336238 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-05 01:00:34.336245 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-05 01:00:34.336252 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-05 01:00:34.336259 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-05 01:00:34.336266 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-05 01:00:34.336272 | orchestrator |
2026-04-05 01:00:34.336278 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 01:00:34.336285 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:07.350) 0:03:31.030 **********
2026-04-05 01:00:34.336292 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336299 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336306 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336314 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.336322 | orchestrator |
2026-04-05 01:00:34.336329 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-05 01:00:34.336336 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:01.220) 0:03:32.250 **********
2026-04-05 01:00:34.336342 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.336350 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.336358 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.336365 | orchestrator |
2026-04-05 01:00:34.336372 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-05 01:00:34.336379 | orchestrator | Sunday 05 April 2026 00:52:05 +0000 (0:00:00.792) 0:03:33.042 **********
2026-04-05 01:00:34.336392 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.336400 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.336407 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.336414 | orchestrator |
2026-04-05 01:00:34.336421 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 01:00:34.336429 | orchestrator | Sunday 05 April 2026 00:52:06 +0000 (0:00:01.589) 0:03:34.632 **********
2026-04-05 01:00:34.336436 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.336443 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.336450 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.336457 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336464 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336471 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336479 | orchestrator |
2026-04-05 01:00:34.336486 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 01:00:34.336493 | orchestrator | Sunday 05 April 2026 00:52:07 +0000 (0:00:00.997) 0:03:35.629 **********
2026-04-05 01:00:34.336500 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.336507 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.336514 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.336521 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336528 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336535 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336542 | orchestrator |
2026-04-05 01:00:34.336549 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 01:00:34.336556 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:00.819) 0:03:36.449 **********
2026-04-05 01:00:34.336563 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.336570 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.336578 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.336585 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336591 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336599 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336606 | orchestrator |
2026-04-05 01:00:34.336638 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 01:00:34.336646 | orchestrator | Sunday 05 April 2026 00:52:09 +0000 (0:00:01.026) 0:03:37.475 **********
2026-04-05 01:00:34.336654 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.336661 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.336668 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.336675 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336682 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336689 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336697 | orchestrator |
2026-04-05 01:00:34.336704 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 01:00:34.336711 | orchestrator | Sunday 05 April 2026 00:52:10 +0000 (0:00:00.689) 0:03:38.164 **********
2026-04-05 01:00:34.336718 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.336725 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.336732 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.336739 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336746 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336753 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336759 | orchestrator |
2026-04-05 01:00:34.336766 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 01:00:34.336773 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:01.158) 0:03:39.323 **********
2026-04-05 01:00:34.336785 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.336792 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.336806 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.336814 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336821 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336828 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336835 | orchestrator |
2026-04-05 01:00:34.336842 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 01:00:34.336849 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:00.689) 0:03:40.012 **********
2026-04-05 01:00:34.336857 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.336864 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.336871 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.336878 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336886 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336892 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336900 | orchestrator |
2026-04-05 01:00:34.336907 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 01:00:34.336915 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:00.846) 0:03:40.859 **********
2026-04-05 01:00:34.336922 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.336929 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.336936 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.336943 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.336950 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.336958 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.336980 | orchestrator |
2026-04-05 01:00:34.336988 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 01:00:34.336995 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:00.655) 0:03:41.514 **********
2026-04-05 01:00:34.337002 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.337010 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.337017 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.337024 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.337032 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.337039 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.337046 | orchestrator |
2026-04-05 01:00:34.337053 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 01:00:34.337060 | orchestrator | Sunday 05 April 2026 00:52:16 +0000 (0:00:02.690) 0:03:44.205 **********
2026-04-05 01:00:34.337068 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.337075 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.337082 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.337089 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.337097 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.337104 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.337111 | orchestrator |
2026-04-05 01:00:34.337118 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 01:00:34.337126 | orchestrator | Sunday 05 April 2026 00:52:17 +0000 (0:00:00.947) 0:03:45.153 **********
2026-04-05 01:00:34.337133 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.337140 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.337147 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.337153 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.337159 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.337165 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.337171 | orchestrator |
2026-04-05 01:00:34.337178 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 01:00:34.337184 | orchestrator | Sunday 05 April 2026 00:52:18 +0000 (0:00:00.834) 0:03:45.987 **********
2026-04-05 01:00:34.337191 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.337198 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.337206 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.337213 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.337231 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.337238 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.337245 | orchestrator |
2026-04-05 01:00:34.337252 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 01:00:34.337260 | orchestrator | Sunday 05 April 2026 00:52:19 +0000 (0:00:01.020) 0:03:47.008 **********
2026-04-05 01:00:34.337267 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.337273 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.337279 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 01:00:34.337286 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.337319 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.337328 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.337335 | orchestrator |
2026-04-05 01:00:34.337342 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 01:00:34.337349 | orchestrator | Sunday 05 April 2026 00:52:19 +0000 (0:00:00.706) 0:03:47.715 **********
2026-04-05 01:00:34.337357 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-05 01:00:34.337368 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-05 01:00:34.337382 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.337389 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-05 01:00:34.337397 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-05 01:00:34.337404 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-05 01:00:34.337411 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-05 01:00:34.337419 | orchestrator | skipping:
[testbed-node-4] 2026-04-05 01:00:34.337426 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.337433 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.337440 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.337447 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.337455 | orchestrator | 2026-04-05 01:00:34.337462 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 01:00:34.337469 | orchestrator | Sunday 05 April 2026 00:52:21 +0000 (0:00:01.285) 0:03:49.000 ********** 2026-04-05 01:00:34.337476 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.337489 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.337496 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.337503 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.337510 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.337517 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.337524 | orchestrator | 2026-04-05 01:00:34.337532 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 01:00:34.337539 | orchestrator | Sunday 05 April 2026 00:52:21 +0000 (0:00:00.798) 0:03:49.799 ********** 2026-04-05 01:00:34.337546 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.337553 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.337560 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.337568 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.337575 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.337582 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.337589 | orchestrator | 2026-04-05 01:00:34.337596 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 01:00:34.337604 | orchestrator | 
Sunday 05 April 2026 00:52:22 +0000 (0:00:00.946) 0:03:50.746 ********** 2026-04-05 01:00:34.337611 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.337618 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.337625 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.337633 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.337639 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.337647 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.337654 | orchestrator | 2026-04-05 01:00:34.337661 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 01:00:34.337668 | orchestrator | Sunday 05 April 2026 00:52:23 +0000 (0:00:00.696) 0:03:51.443 ********** 2026-04-05 01:00:34.337675 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.337682 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.337689 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.337697 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.337704 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.337711 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.337718 | orchestrator | 2026-04-05 01:00:34.337725 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 01:00:34.337755 | orchestrator | Sunday 05 April 2026 00:52:24 +0000 (0:00:00.913) 0:03:52.357 ********** 2026-04-05 01:00:34.337763 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.337769 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.337777 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.337784 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.337791 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.337798 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.337805 | orchestrator | 2026-04-05 01:00:34.337812 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 01:00:34.337819 | orchestrator | Sunday 05 April 2026 00:52:25 +0000 (0:00:00.711) 0:03:53.068 ********** 2026-04-05 01:00:34.337826 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.337834 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.337840 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.337848 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.337855 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.337862 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.337869 | orchestrator | 2026-04-05 01:00:34.337877 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 01:00:34.337884 | orchestrator | Sunday 05 April 2026 00:52:26 +0000 (0:00:00.907) 0:03:53.976 ********** 2026-04-05 01:00:34.337891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.337903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.337916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.337924 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.337931 | orchestrator | 2026-04-05 01:00:34.337938 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 01:00:34.337945 | orchestrator | Sunday 05 April 2026 00:52:26 +0000 (0:00:00.546) 0:03:54.522 ********** 2026-04-05 01:00:34.337952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.337959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.337980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.337987 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.337992 | orchestrator | 2026-04-05 01:00:34.337998 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 01:00:34.338004 | orchestrator | Sunday 05 April 2026 00:52:27 +0000 (0:00:00.443) 0:03:54.966 ********** 2026-04-05 01:00:34.338012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.338048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.338055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.338063 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338069 | orchestrator | 2026-04-05 01:00:34.338077 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 01:00:34.338084 | orchestrator | Sunday 05 April 2026 00:52:27 +0000 (0:00:00.459) 0:03:55.425 ********** 2026-04-05 01:00:34.338092 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.338099 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.338106 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.338113 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.338120 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.338127 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.338134 | orchestrator | 2026-04-05 01:00:34.338140 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 01:00:34.338147 | orchestrator | Sunday 05 April 2026 00:52:28 +0000 (0:00:00.915) 0:03:56.340 ********** 2026-04-05 01:00:34.338155 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 01:00:34.338162 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 01:00:34.338169 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 01:00:34.338176 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-05 01:00:34.338184 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.338191 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2026-04-05 01:00:34.338198 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.338205 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-05 01:00:34.338213 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.338220 | orchestrator | 2026-04-05 01:00:34.338227 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 01:00:34.338233 | orchestrator | Sunday 05 April 2026 00:52:30 +0000 (0:00:02.119) 0:03:58.460 ********** 2026-04-05 01:00:34.338239 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.338244 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.338250 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.338257 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.338263 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.338269 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.338275 | orchestrator | 2026-04-05 01:00:34.338281 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 01:00:34.338286 | orchestrator | Sunday 05 April 2026 00:52:33 +0000 (0:00:03.061) 0:04:01.521 ********** 2026-04-05 01:00:34.338293 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.338300 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.338308 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.338315 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.338322 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.338337 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.338343 | orchestrator | 2026-04-05 01:00:34.338350 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-05 01:00:34.338357 | orchestrator | Sunday 05 April 2026 00:52:34 +0000 (0:00:01.333) 0:04:02.855 ********** 2026-04-05 01:00:34.338364 | orchestrator | 
skipping: [testbed-node-3] 2026-04-05 01:00:34.338371 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.338379 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.338386 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.338394 | orchestrator | 2026-04-05 01:00:34.338401 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-05 01:00:34.338440 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:01.214) 0:04:04.069 ********** 2026-04-05 01:00:34.338448 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.338455 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.338462 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.338469 | orchestrator | 2026-04-05 01:00:34.338477 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-05 01:00:34.338484 | orchestrator | Sunday 05 April 2026 00:52:36 +0000 (0:00:00.412) 0:04:04.482 ********** 2026-04-05 01:00:34.338491 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.338498 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.338505 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.338513 | orchestrator | 2026-04-05 01:00:34.338520 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-05 01:00:34.338527 | orchestrator | Sunday 05 April 2026 00:52:38 +0000 (0:00:01.752) 0:04:06.234 ********** 2026-04-05 01:00:34.338534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 01:00:34.338541 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 01:00:34.338548 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 01:00:34.338555 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.338562 | orchestrator | 
2026-04-05 01:00:34.338569 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-05 01:00:34.338581 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:00.730) 0:04:06.964 ********** 2026-04-05 01:00:34.338588 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.338595 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.338602 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.338609 | orchestrator | 2026-04-05 01:00:34.338616 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-05 01:00:34.338624 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:00.428) 0:04:07.393 ********** 2026-04-05 01:00:34.338631 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.338638 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.338645 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.338652 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.338660 | orchestrator | 2026-04-05 01:00:34.338667 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-05 01:00:34.338674 | orchestrator | Sunday 05 April 2026 00:52:40 +0000 (0:00:01.310) 0:04:08.704 ********** 2026-04-05 01:00:34.338681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.338688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.338696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.338703 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338710 | orchestrator | 2026-04-05 01:00:34.338716 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-05 01:00:34.338724 | orchestrator | Sunday 05 April 2026 00:52:41 +0000 
(0:00:00.437) 0:04:09.142 ********** 2026-04-05 01:00:34.338731 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338744 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.338751 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.338758 | orchestrator | 2026-04-05 01:00:34.338765 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-05 01:00:34.338772 | orchestrator | Sunday 05 April 2026 00:52:41 +0000 (0:00:00.394) 0:04:09.537 ********** 2026-04-05 01:00:34.338779 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338786 | orchestrator | 2026-04-05 01:00:34.338792 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-05 01:00:34.338798 | orchestrator | Sunday 05 April 2026 00:52:41 +0000 (0:00:00.286) 0:04:09.824 ********** 2026-04-05 01:00:34.338805 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.338812 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338819 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.338826 | orchestrator | 2026-04-05 01:00:34.338833 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-05 01:00:34.338841 | orchestrator | Sunday 05 April 2026 00:52:42 +0000 (0:00:00.364) 0:04:10.188 ********** 2026-04-05 01:00:34.338848 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338855 | orchestrator | 2026-04-05 01:00:34.338862 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-05 01:00:34.338871 | orchestrator | Sunday 05 April 2026 00:52:42 +0000 (0:00:00.239) 0:04:10.427 ********** 2026-04-05 01:00:34.338880 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338888 | orchestrator | 2026-04-05 01:00:34.338897 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-05 
01:00:34.338906 | orchestrator | Sunday 05 April 2026 00:52:43 +0000 (0:00:00.833) 0:04:11.261 ********** 2026-04-05 01:00:34.338913 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338922 | orchestrator | 2026-04-05 01:00:34.338930 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-05 01:00:34.338938 | orchestrator | Sunday 05 April 2026 00:52:43 +0000 (0:00:00.138) 0:04:11.400 ********** 2026-04-05 01:00:34.338946 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.338955 | orchestrator | 2026-04-05 01:00:34.338963 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-05 01:00:34.338984 | orchestrator | Sunday 05 April 2026 00:52:43 +0000 (0:00:00.241) 0:04:11.641 ********** 2026-04-05 01:00:34.338992 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339001 | orchestrator | 2026-04-05 01:00:34.339009 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-05 01:00:34.339017 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:00.248) 0:04:11.890 ********** 2026-04-05 01:00:34.339025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.339033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.339041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.339049 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339057 | orchestrator | 2026-04-05 01:00:34.339065 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-05 01:00:34.339099 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:00.524) 0:04:12.414 ********** 2026-04-05 01:00:34.339107 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339115 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.339123 | 
orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.339130 | orchestrator | 2026-04-05 01:00:34.339138 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-05 01:00:34.339146 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:00.381) 0:04:12.795 ********** 2026-04-05 01:00:34.339153 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339161 | orchestrator | 2026-04-05 01:00:34.339168 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-05 01:00:34.339177 | orchestrator | Sunday 05 April 2026 00:52:45 +0000 (0:00:00.236) 0:04:13.032 ********** 2026-04-05 01:00:34.339193 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339201 | orchestrator | 2026-04-05 01:00:34.339208 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-05 01:00:34.339217 | orchestrator | Sunday 05 April 2026 00:52:45 +0000 (0:00:00.246) 0:04:13.279 ********** 2026-04-05 01:00:34.339225 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.339232 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.339240 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.339253 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.339261 | orchestrator | 2026-04-05 01:00:34.339268 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-05 01:00:34.339274 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:01.134) 0:04:14.414 ********** 2026-04-05 01:00:34.339280 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.339286 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.339292 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.339299 | orchestrator | 2026-04-05 01:00:34.339307 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2026-04-05 01:00:34.339315 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:00.370) 0:04:14.785 ********** 2026-04-05 01:00:34.339323 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.339330 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.339336 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.339342 | orchestrator | 2026-04-05 01:00:34.339348 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-05 01:00:34.339353 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:01.191) 0:04:15.976 ********** 2026-04-05 01:00:34.339359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.339365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.339371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.339377 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339382 | orchestrator | 2026-04-05 01:00:34.339388 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-05 01:00:34.339394 | orchestrator | Sunday 05 April 2026 00:52:49 +0000 (0:00:01.265) 0:04:17.242 ********** 2026-04-05 01:00:34.339401 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.339407 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.339414 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.339420 | orchestrator | 2026-04-05 01:00:34.339426 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-05 01:00:34.339432 | orchestrator | Sunday 05 April 2026 00:52:49 +0000 (0:00:00.564) 0:04:17.807 ********** 2026-04-05 01:00:34.339437 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.339443 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.339449 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 01:00:34.339455 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.339461 | orchestrator | 2026-04-05 01:00:34.339466 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-05 01:00:34.339472 | orchestrator | Sunday 05 April 2026 00:52:51 +0000 (0:00:01.246) 0:04:19.053 ********** 2026-04-05 01:00:34.339478 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.339485 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.339491 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.339497 | orchestrator | 2026-04-05 01:00:34.339502 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-05 01:00:34.339508 | orchestrator | Sunday 05 April 2026 00:52:51 +0000 (0:00:00.372) 0:04:19.425 ********** 2026-04-05 01:00:34.339514 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.339520 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.339526 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.339541 | orchestrator | 2026-04-05 01:00:34.339548 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-05 01:00:34.339554 | orchestrator | Sunday 05 April 2026 00:52:52 +0000 (0:00:01.221) 0:04:20.647 ********** 2026-04-05 01:00:34.339560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.339566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.339572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.339578 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339584 | orchestrator | 2026-04-05 01:00:34.339590 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-05 
01:00:34.339596 | orchestrator | Sunday 05 April 2026 00:52:53 +0000 (0:00:00.892) 0:04:21.539 ********** 2026-04-05 01:00:34.339602 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.339608 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.339614 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.339620 | orchestrator | 2026-04-05 01:00:34.339626 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-05 01:00:34.339632 | orchestrator | Sunday 05 April 2026 00:52:54 +0000 (0:00:00.376) 0:04:21.916 ********** 2026-04-05 01:00:34.339639 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339645 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.339651 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.339657 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.339663 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.339721 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.339728 | orchestrator | 2026-04-05 01:00:34.339734 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-05 01:00:34.339740 | orchestrator | Sunday 05 April 2026 00:52:55 +0000 (0:00:00.988) 0:04:22.904 ********** 2026-04-05 01:00:34.339746 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.339752 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.339758 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.339764 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.339770 | orchestrator | 2026-04-05 01:00:34.339776 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-05 01:00:34.339782 | orchestrator | Sunday 05 April 2026 00:52:56 +0000 (0:00:01.226) 0:04:24.131 ********** 2026-04-05 01:00:34.339788 | orchestrator | 
ok: [testbed-node-0] 2026-04-05 01:00:34.339794 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.339800 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.339806 | orchestrator | 2026-04-05 01:00:34.339812 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-05 01:00:34.339818 | orchestrator | Sunday 05 April 2026 00:52:56 +0000 (0:00:00.402) 0:04:24.533 ********** 2026-04-05 01:00:34.339824 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.339840 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.339846 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.339852 | orchestrator | 2026-04-05 01:00:34.339859 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-05 01:00:34.339865 | orchestrator | Sunday 05 April 2026 00:52:57 +0000 (0:00:01.245) 0:04:25.779 ********** 2026-04-05 01:00:34.339870 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 01:00:34.339876 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 01:00:34.339882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 01:00:34.339888 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.339893 | orchestrator | 2026-04-05 01:00:34.339899 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-05 01:00:34.339906 | orchestrator | Sunday 05 April 2026 00:52:58 +0000 (0:00:00.651) 0:04:26.430 ********** 2026-04-05 01:00:34.339912 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.339917 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.339931 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.339937 | orchestrator | 2026-04-05 01:00:34.339943 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-05 01:00:34.339949 | orchestrator | 2026-04-05 
01:00:34.339955 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 01:00:34.339961 | orchestrator | Sunday 05 April 2026 00:52:59 +0000 (0:00:01.148) 0:04:27.578 ********** 2026-04-05 01:00:34.340022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.340029 | orchestrator | 2026-04-05 01:00:34.340035 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 01:00:34.340042 | orchestrator | Sunday 05 April 2026 00:53:00 +0000 (0:00:00.676) 0:04:28.255 ********** 2026-04-05 01:00:34.340049 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.340055 | orchestrator | 2026-04-05 01:00:34.340060 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 01:00:34.340066 | orchestrator | Sunday 05 April 2026 00:53:01 +0000 (0:00:00.634) 0:04:28.889 ********** 2026-04-05 01:00:34.340072 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.340078 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.340083 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.340089 | orchestrator | 2026-04-05 01:00:34.340095 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 01:00:34.340101 | orchestrator | Sunday 05 April 2026 00:53:02 +0000 (0:00:01.426) 0:04:30.316 ********** 2026-04-05 01:00:34.340107 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340113 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340119 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340125 | orchestrator | 2026-04-05 01:00:34.340131 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-04-05 01:00:34.340137 | orchestrator | Sunday 05 April 2026 00:53:02 +0000 (0:00:00.354) 0:04:30.671 ********** 2026-04-05 01:00:34.340143 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340149 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340155 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340161 | orchestrator | 2026-04-05 01:00:34.340167 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 01:00:34.340173 | orchestrator | Sunday 05 April 2026 00:53:03 +0000 (0:00:00.350) 0:04:31.021 ********** 2026-04-05 01:00:34.340180 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340185 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340192 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340198 | orchestrator | 2026-04-05 01:00:34.340204 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 01:00:34.340210 | orchestrator | Sunday 05 April 2026 00:53:03 +0000 (0:00:00.338) 0:04:31.359 ********** 2026-04-05 01:00:34.340216 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.340222 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.340228 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.340234 | orchestrator | 2026-04-05 01:00:34.340240 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 01:00:34.340246 | orchestrator | Sunday 05 April 2026 00:53:04 +0000 (0:00:01.185) 0:04:32.544 ********** 2026-04-05 01:00:34.340253 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340259 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340265 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340271 | orchestrator | 2026-04-05 01:00:34.340276 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 
01:00:34.340282 | orchestrator | Sunday 05 April 2026 00:53:05 +0000 (0:00:00.366) 0:04:32.911 ********** 2026-04-05 01:00:34.340339 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340348 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340366 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340372 | orchestrator | 2026-04-05 01:00:34.340378 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 01:00:34.340384 | orchestrator | Sunday 05 April 2026 00:53:05 +0000 (0:00:00.331) 0:04:33.242 ********** 2026-04-05 01:00:34.340390 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.340396 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.340402 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.340408 | orchestrator | 2026-04-05 01:00:34.340414 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 01:00:34.340420 | orchestrator | Sunday 05 April 2026 00:53:06 +0000 (0:00:00.827) 0:04:34.069 ********** 2026-04-05 01:00:34.340427 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.340433 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.340439 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.340444 | orchestrator | 2026-04-05 01:00:34.340450 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 01:00:34.340456 | orchestrator | Sunday 05 April 2026 00:53:07 +0000 (0:00:01.491) 0:04:35.561 ********** 2026-04-05 01:00:34.340461 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340467 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340473 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340479 | orchestrator | 2026-04-05 01:00:34.340486 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 01:00:34.340491 | orchestrator | Sunday 
05 April 2026 00:53:08 +0000 (0:00:00.345) 0:04:35.906 ********** 2026-04-05 01:00:34.340497 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.340503 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.340509 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.340515 | orchestrator | 2026-04-05 01:00:34.340522 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 01:00:34.340528 | orchestrator | Sunday 05 April 2026 00:53:08 +0000 (0:00:00.349) 0:04:36.256 ********** 2026-04-05 01:00:34.340534 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340541 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340546 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340553 | orchestrator | 2026-04-05 01:00:34.340559 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 01:00:34.340565 | orchestrator | Sunday 05 April 2026 00:53:08 +0000 (0:00:00.354) 0:04:36.610 ********** 2026-04-05 01:00:34.340571 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340577 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340583 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340589 | orchestrator | 2026-04-05 01:00:34.340595 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 01:00:34.340602 | orchestrator | Sunday 05 April 2026 00:53:09 +0000 (0:00:00.358) 0:04:36.969 ********** 2026-04-05 01:00:34.340608 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340614 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340620 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340627 | orchestrator | 2026-04-05 01:00:34.340633 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 01:00:34.340639 | orchestrator | Sunday 05 April 2026 
00:53:09 +0000 (0:00:00.650) 0:04:37.619 ********** 2026-04-05 01:00:34.340645 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340652 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340658 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340664 | orchestrator | 2026-04-05 01:00:34.340671 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 01:00:34.340677 | orchestrator | Sunday 05 April 2026 00:53:10 +0000 (0:00:00.327) 0:04:37.947 ********** 2026-04-05 01:00:34.340683 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.340689 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.340695 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.340710 | orchestrator | 2026-04-05 01:00:34.340716 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 01:00:34.340723 | orchestrator | Sunday 05 April 2026 00:53:10 +0000 (0:00:00.366) 0:04:38.314 ********** 2026-04-05 01:00:34.340729 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.340868 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.340879 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.340884 | orchestrator | 2026-04-05 01:00:34.340891 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 01:00:34.340897 | orchestrator | Sunday 05 April 2026 00:53:10 +0000 (0:00:00.539) 0:04:38.853 ********** 2026-04-05 01:00:34.341005 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341031 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.341039 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.341046 | orchestrator | 2026-04-05 01:00:34.341054 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 01:00:34.341061 | orchestrator | Sunday 05 April 2026 00:53:11 +0000 (0:00:00.777) 
0:04:39.630 ********** 2026-04-05 01:00:34.341068 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341075 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.341082 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.341087 | orchestrator | 2026-04-05 01:00:34.341092 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-05 01:00:34.341098 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:00.557) 0:04:40.187 ********** 2026-04-05 01:00:34.341104 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341109 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.341116 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.341122 | orchestrator | 2026-04-05 01:00:34.341128 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-05 01:00:34.341135 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:00.347) 0:04:40.535 ********** 2026-04-05 01:00:34.341143 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-04-05 01:00:34.341151 | orchestrator | 2026-04-05 01:00:34.341158 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-05 01:00:34.341165 | orchestrator | Sunday 05 April 2026 00:53:13 +0000 (0:00:00.931) 0:04:41.467 ********** 2026-04-05 01:00:34.341172 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.341180 | orchestrator | 2026-04-05 01:00:34.341238 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-05 01:00:34.341245 | orchestrator | Sunday 05 April 2026 00:53:13 +0000 (0:00:00.158) 0:04:41.626 ********** 2026-04-05 01:00:34.341252 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-05 01:00:34.341259 | orchestrator | 2026-04-05 01:00:34.341266 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-04-05 01:00:34.341273 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:01.265) 0:04:42.891 ********** 2026-04-05 01:00:34.341279 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341284 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.341290 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.341296 | orchestrator | 2026-04-05 01:00:34.341304 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-05 01:00:34.341311 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:00.435) 0:04:43.327 ********** 2026-04-05 01:00:34.341318 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341325 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.341332 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.341339 | orchestrator | 2026-04-05 01:00:34.341346 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-05 01:00:34.341353 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:00.384) 0:04:43.711 ********** 2026-04-05 01:00:34.341361 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.341374 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.341382 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.341398 | orchestrator | 2026-04-05 01:00:34.341406 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-05 01:00:34.341413 | orchestrator | Sunday 05 April 2026 00:53:17 +0000 (0:00:01.494) 0:04:45.206 ********** 2026-04-05 01:00:34.341421 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.341428 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.341435 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.341442 | orchestrator | 2026-04-05 01:00:34.341449 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-04-05 01:00:34.341456 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:00.764) 0:04:45.971 ********** 2026-04-05 01:00:34.341463 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.341471 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.341478 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.341485 | orchestrator | 2026-04-05 01:00:34.341492 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-05 01:00:34.341499 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:00.711) 0:04:46.682 ********** 2026-04-05 01:00:34.341507 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341514 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.341521 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.341528 | orchestrator | 2026-04-05 01:00:34.341535 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-05 01:00:34.341543 | orchestrator | Sunday 05 April 2026 00:53:19 +0000 (0:00:00.673) 0:04:47.356 ********** 2026-04-05 01:00:34.341549 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.341555 | orchestrator | 2026-04-05 01:00:34.341560 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-05 01:00:34.341566 | orchestrator | Sunday 05 April 2026 00:53:20 +0000 (0:00:01.249) 0:04:48.605 ********** 2026-04-05 01:00:34.341572 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341577 | orchestrator | 2026-04-05 01:00:34.341584 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-05 01:00:34.341590 | orchestrator | Sunday 05 April 2026 00:53:21 +0000 (0:00:01.118) 0:04:49.724 ********** 2026-04-05 01:00:34.341596 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 01:00:34.341601 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.341608 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.341614 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:00:34.341621 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-05 01:00:34.341629 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:00:34.341636 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:00:34.341642 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-05 01:00:34.341650 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:00:34.341657 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-05 01:00:34.341664 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-05 01:00:34.341671 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-05 01:00:34.341679 | orchestrator | 2026-04-05 01:00:34.341686 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-05 01:00:34.341693 | orchestrator | Sunday 05 April 2026 00:53:25 +0000 (0:00:04.082) 0:04:53.806 ********** 2026-04-05 01:00:34.341700 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.341707 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.341714 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.341721 | orchestrator | 2026-04-05 01:00:34.341728 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-05 01:00:34.341736 | orchestrator | Sunday 05 April 2026 00:53:27 +0000 (0:00:01.915) 0:04:55.721 ********** 2026-04-05 01:00:34.341743 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341755 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.341763 | orchestrator | ok: [testbed-node-2] 
2026-04-05 01:00:34.341770 | orchestrator | 2026-04-05 01:00:34.341777 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-05 01:00:34.341784 | orchestrator | Sunday 05 April 2026 00:53:28 +0000 (0:00:00.679) 0:04:56.400 ********** 2026-04-05 01:00:34.341791 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.341798 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.341805 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.341812 | orchestrator | 2026-04-05 01:00:34.341819 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-05 01:00:34.341827 | orchestrator | Sunday 05 April 2026 00:53:28 +0000 (0:00:00.429) 0:04:56.829 ********** 2026-04-05 01:00:34.341834 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.341867 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.341875 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.341883 | orchestrator | 2026-04-05 01:00:34.341890 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-05 01:00:34.341897 | orchestrator | Sunday 05 April 2026 00:53:31 +0000 (0:00:02.686) 0:04:59.516 ********** 2026-04-05 01:00:34.341904 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.341912 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.341919 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.341926 | orchestrator | 2026-04-05 01:00:34.341933 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-05 01:00:34.341940 | orchestrator | Sunday 05 April 2026 00:53:33 +0000 (0:00:01.704) 0:05:01.220 ********** 2026-04-05 01:00:34.341947 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.341954 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.341961 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.341989 
| orchestrator | 2026-04-05 01:00:34.341996 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-05 01:00:34.342003 | orchestrator | Sunday 05 April 2026 00:53:33 +0000 (0:00:00.376) 0:05:01.597 ********** 2026-04-05 01:00:34.342052 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.342062 | orchestrator | 2026-04-05 01:00:34.342069 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-05 01:00:34.342076 | orchestrator | Sunday 05 April 2026 00:53:34 +0000 (0:00:00.912) 0:05:02.510 ********** 2026-04-05 01:00:34.342083 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.342090 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.342097 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.342105 | orchestrator | 2026-04-05 01:00:34.342112 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-05 01:00:34.342119 | orchestrator | Sunday 05 April 2026 00:53:34 +0000 (0:00:00.347) 0:05:02.858 ********** 2026-04-05 01:00:34.342126 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.342133 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.342140 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.342146 | orchestrator | 2026-04-05 01:00:34.342152 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-05 01:00:34.342158 | orchestrator | Sunday 05 April 2026 00:53:35 +0000 (0:00:00.366) 0:05:03.224 ********** 2026-04-05 01:00:34.342164 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.342172 | orchestrator | 2026-04-05 01:00:34.342179 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-04-05 01:00:34.342186 | orchestrator | Sunday 05 April 2026 00:53:35 +0000 (0:00:00.539) 0:05:03.764 ********** 2026-04-05 01:00:34.342193 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.342200 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.342207 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.342214 | orchestrator | 2026-04-05 01:00:34.342221 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-05 01:00:34.342238 | orchestrator | Sunday 05 April 2026 00:53:38 +0000 (0:00:02.105) 0:05:05.869 ********** 2026-04-05 01:00:34.342245 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.342253 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.342260 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.342267 | orchestrator | 2026-04-05 01:00:34.342273 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-05 01:00:34.342279 | orchestrator | Sunday 05 April 2026 00:53:39 +0000 (0:00:01.524) 0:05:07.394 ********** 2026-04-05 01:00:34.342286 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.342293 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.342301 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.342307 | orchestrator | 2026-04-05 01:00:34.342315 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-05 01:00:34.342322 | orchestrator | Sunday 05 April 2026 00:53:41 +0000 (0:00:02.189) 0:05:09.583 ********** 2026-04-05 01:00:34.342330 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:00:34.342337 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:00:34.342344 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:00:34.342351 | orchestrator | 2026-04-05 01:00:34.342358 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-04-05 01:00:34.342365 | orchestrator | Sunday 05 April 2026 00:53:44 +0000 (0:00:02.350) 0:05:11.934 ********** 2026-04-05 01:00:34.342373 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.342380 | orchestrator | 2026-04-05 01:00:34.342387 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-05 01:00:34.342394 | orchestrator | Sunday 05 April 2026 00:53:44 +0000 (0:00:00.691) 0:05:12.625 ********** 2026-04-05 01:00:34.342402 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-04-05 01:00:34.342409 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.342416 | orchestrator | 2026-04-05 01:00:34.342423 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-05 01:00:34.342431 | orchestrator | Sunday 05 April 2026 00:54:06 +0000 (0:00:21.843) 0:05:34.469 ********** 2026-04-05 01:00:34.342438 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.342445 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.342452 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.342460 | orchestrator | 2026-04-05 01:00:34.342467 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-05 01:00:34.342474 | orchestrator | Sunday 05 April 2026 00:54:15 +0000 (0:00:09.383) 0:05:43.852 ********** 2026-04-05 01:00:34.342481 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.342488 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.342495 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.342503 | orchestrator | 2026-04-05 01:00:34.342510 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-05 01:00:34.342542 | orchestrator | 
Sunday 05 April 2026 00:54:16 +0000 (0:00:00.306) 0:05:44.159 ********** 2026-04-05 01:00:34.342552 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__879837f20dccbdfb772bf99402f2e1ae1d512caa'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-05 01:00:34.342561 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__879837f20dccbdfb772bf99402f2e1ae1d512caa'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-05 01:00:34.342579 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__879837f20dccbdfb772bf99402f2e1ae1d512caa'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-05 01:00:34.342589 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__879837f20dccbdfb772bf99402f2e1ae1d512caa'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-05 01:00:34.342596 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__879837f20dccbdfb772bf99402f2e1ae1d512caa'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-05 01:00:34.342604 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__879837f20dccbdfb772bf99402f2e1ae1d512caa'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__879837f20dccbdfb772bf99402f2e1ae1d512caa'}])  2026-04-05 01:00:34.342613 | orchestrator | 2026-04-05 01:00:34.342620 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 01:00:34.342627 | orchestrator | Sunday 05 April 2026 00:54:31 +0000 (0:00:15.209) 0:05:59.368 ********** 2026-04-05 01:00:34.342634 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.342641 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.342648 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.342654 | orchestrator | 2026-04-05 01:00:34.342660 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-05 01:00:34.342666 | orchestrator | Sunday 05 April 2026 00:54:31 +0000 (0:00:00.345) 0:05:59.714 ********** 2026-04-05 01:00:34.342672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.342679 | orchestrator | 2026-04-05 01:00:34.342685 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-05 01:00:34.342691 | orchestrator | Sunday 05 April 2026 00:54:32 +0000 (0:00:00.806) 0:06:00.520 ********** 2026-04-05 01:00:34.342697 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.342705 | orchestrator | ok: [testbed-node-1] 2026-04-05 
01:00:34.342712 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.342718 | orchestrator | 2026-04-05 01:00:34.342725 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-05 01:00:34.342731 | orchestrator | Sunday 05 April 2026 00:54:32 +0000 (0:00:00.334) 0:06:00.854 ********** 2026-04-05 01:00:34.342738 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.342746 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.342752 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.342759 | orchestrator | 2026-04-05 01:00:34.342766 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-05 01:00:34.342773 | orchestrator | Sunday 05 April 2026 00:54:33 +0000 (0:00:00.339) 0:06:01.194 ********** 2026-04-05 01:00:34.342780 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 01:00:34.342787 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 01:00:34.342794 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 01:00:34.342801 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.342808 | orchestrator | 2026-04-05 01:00:34.342815 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-05 01:00:34.342827 | orchestrator | Sunday 05 April 2026 00:54:34 +0000 (0:00:00.878) 0:06:02.072 ********** 2026-04-05 01:00:34.342834 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.342841 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.342869 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.342877 | orchestrator | 2026-04-05 01:00:34.342884 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-05 01:00:34.342891 | orchestrator | 2026-04-05 01:00:34.342898 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************
2026-04-05 01:00:34.342905 | orchestrator | Sunday 05 April 2026 00:54:35 +0000 (0:00:00.994) 0:06:03.067 **********
2026-04-05 01:00:34.342912 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.342919 | orchestrator |
2026-04-05 01:00:34.342926 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 01:00:34.342933 | orchestrator | Sunday 05 April 2026 00:54:35 +0000 (0:00:00.628) 0:06:03.695 **********
2026-04-05 01:00:34.342940 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.342947 | orchestrator |
2026-04-05 01:00:34.342954 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 01:00:34.342962 | orchestrator | Sunday 05 April 2026 00:54:36 +0000 (0:00:00.799) 0:06:04.495 **********
2026-04-05 01:00:34.342988 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.342995 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.343000 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.343006 | orchestrator |
2026-04-05 01:00:34.343012 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 01:00:34.343019 | orchestrator | Sunday 05 April 2026 00:54:37 +0000 (0:00:00.778) 0:06:05.273 **********
2026-04-05 01:00:34.343026 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343033 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343040 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343047 | orchestrator |
2026-04-05 01:00:34.343054 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 01:00:34.343062 | orchestrator | Sunday 05 April 2026 00:54:37 +0000 (0:00:00.310) 0:06:05.584 **********
2026-04-05 01:00:34.343069 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343076 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343083 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343090 | orchestrator |
2026-04-05 01:00:34.343097 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 01:00:34.343104 | orchestrator | Sunday 05 April 2026 00:54:38 +0000 (0:00:00.341) 0:06:05.925 **********
2026-04-05 01:00:34.343112 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343119 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343126 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343133 | orchestrator |
2026-04-05 01:00:34.343140 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 01:00:34.343147 | orchestrator | Sunday 05 April 2026 00:54:38 +0000 (0:00:00.651) 0:06:06.577 **********
2026-04-05 01:00:34.343155 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.343162 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.343169 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.343176 | orchestrator |
2026-04-05 01:00:34.343183 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 01:00:34.343190 | orchestrator | Sunday 05 April 2026 00:54:39 +0000 (0:00:00.784) 0:06:07.361 **********
2026-04-05 01:00:34.343197 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343205 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343212 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343219 | orchestrator |
2026-04-05 01:00:34.343226 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 01:00:34.343238 | orchestrator | Sunday 05 April 2026 00:54:39 +0000 (0:00:00.304) 0:06:07.665 **********
2026-04-05 01:00:34.343245 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343252 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343259 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343266 | orchestrator |
2026-04-05 01:00:34.343272 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 01:00:34.343278 | orchestrator | Sunday 05 April 2026 00:54:40 +0000 (0:00:00.382) 0:06:08.047 **********
2026-04-05 01:00:34.343286 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.343293 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.343300 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.343307 | orchestrator |
2026-04-05 01:00:34.343314 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 01:00:34.343321 | orchestrator | Sunday 05 April 2026 00:54:40 +0000 (0:00:00.723) 0:06:08.771 **********
2026-04-05 01:00:34.343328 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.343334 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.343339 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.343346 | orchestrator |
2026-04-05 01:00:34.343353 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 01:00:34.343360 | orchestrator | Sunday 05 April 2026 00:54:42 +0000 (0:00:01.207) 0:06:09.978 **********
2026-04-05 01:00:34.343367 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343374 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343381 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343387 | orchestrator |
2026-04-05 01:00:34.343394 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 01:00:34.343399 | orchestrator | Sunday 05 April 2026 00:54:42 +0000 (0:00:00.381) 0:06:10.360 **********
2026-04-05 01:00:34.343405 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.343411 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.343417 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.343423 | orchestrator |
2026-04-05 01:00:34.343429 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 01:00:34.343435 | orchestrator | Sunday 05 April 2026 00:54:42 +0000 (0:00:00.329) 0:06:10.689 **********
2026-04-05 01:00:34.343442 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343449 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343456 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343462 | orchestrator |
2026-04-05 01:00:34.343469 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 01:00:34.343505 | orchestrator | Sunday 05 April 2026 00:54:43 +0000 (0:00:00.306) 0:06:10.995 **********
2026-04-05 01:00:34.343512 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343519 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343526 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343533 | orchestrator |
2026-04-05 01:00:34.343540 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 01:00:34.343547 | orchestrator | Sunday 05 April 2026 00:54:43 +0000 (0:00:00.591) 0:06:11.587 **********
2026-04-05 01:00:34.343554 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343561 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343569 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343576 | orchestrator |
2026-04-05 01:00:34.343583 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 01:00:34.343590 | orchestrator | Sunday 05 April 2026 00:54:44 +0000 (0:00:00.331) 0:06:11.918 **********
2026-04-05 01:00:34.343597 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343604 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343611 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343619 | orchestrator |
2026-04-05 01:00:34.343626 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 01:00:34.343633 | orchestrator | Sunday 05 April 2026 00:54:44 +0000 (0:00:00.316) 0:06:12.235 **********
2026-04-05 01:00:34.343646 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343658 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343665 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343672 | orchestrator |
2026-04-05 01:00:34.343680 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 01:00:34.343687 | orchestrator | Sunday 05 April 2026 00:54:44 +0000 (0:00:00.291) 0:06:12.526 **********
2026-04-05 01:00:34.343694 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.343701 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.343708 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.343715 | orchestrator |
2026-04-05 01:00:34.343723 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 01:00:34.343730 | orchestrator | Sunday 05 April 2026 00:54:45 +0000 (0:00:00.612) 0:06:13.138 **********
2026-04-05 01:00:34.343736 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.343742 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.343748 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.343754 | orchestrator |
2026-04-05 01:00:34.343760 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 01:00:34.343766 | orchestrator | Sunday 05 April 2026 00:54:45 +0000 (0:00:00.359) 0:06:13.498 **********
2026-04-05 01:00:34.343773 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.343778 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.343785 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.343792 | orchestrator |
2026-04-05 01:00:34.343799 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-05 01:00:34.343805 | orchestrator | Sunday 05 April 2026 00:54:46 +0000 (0:00:00.531) 0:06:14.030 **********
2026-04-05 01:00:34.343812 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 01:00:34.343820 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 01:00:34.343827 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 01:00:34.343834 | orchestrator |
2026-04-05 01:00:34.343841 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-05 01:00:34.343849 | orchestrator | Sunday 05 April 2026 00:54:47 +0000 (0:00:00.878) 0:06:14.908 **********
2026-04-05 01:00:34.343856 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.343864 | orchestrator |
2026-04-05 01:00:34.343871 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-05 01:00:34.343878 | orchestrator | Sunday 05 April 2026 00:54:47 +0000 (0:00:00.626) 0:06:15.534 **********
2026-04-05 01:00:34.343886 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.343893 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.343900 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.343907 | orchestrator |
2026-04-05 01:00:34.343915 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-05 01:00:34.343922 | orchestrator | Sunday 05 April 2026 00:54:48 +0000 (0:00:00.653) 0:06:16.188 **********
2026-04-05 01:00:34.343929 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.343936 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.343943 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.343950 | orchestrator |
2026-04-05 01:00:34.343957 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-05 01:00:34.343964 | orchestrator | Sunday 05 April 2026 00:54:48 +0000 (0:00:00.326) 0:06:16.514 **********
2026-04-05 01:00:34.344022 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:00:34.344030 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:00:34.344037 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:00:34.344044 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-05 01:00:34.344051 | orchestrator |
2026-04-05 01:00:34.344059 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-05 01:00:34.344072 | orchestrator | Sunday 05 April 2026 00:54:59 +0000 (0:00:10.595) 0:06:27.110 **********
2026-04-05 01:00:34.344080 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.344086 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.344092 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.344097 | orchestrator |
2026-04-05 01:00:34.344103 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-05 01:00:34.344109 | orchestrator | Sunday 05 April 2026 00:54:59 +0000 (0:00:00.590) 0:06:27.700 **********
2026-04-05 01:00:34.344117 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 01:00:34.344124 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 01:00:34.344131 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 01:00:34.344138 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-05 01:00:34.344145 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 01:00:34.344181 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 01:00:34.344190 | orchestrator |
2026-04-05 01:00:34.344197 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-05 01:00:34.344204 | orchestrator | Sunday 05 April 2026 00:55:02 +0000 (0:00:02.527) 0:06:30.227 **********
2026-04-05 01:00:34.344211 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 01:00:34.344218 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 01:00:34.344225 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 01:00:34.344232 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:00:34.344239 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-05 01:00:34.344288 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-05 01:00:34.344295 | orchestrator |
2026-04-05 01:00:34.344302 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-05 01:00:34.344309 | orchestrator | Sunday 05 April 2026 00:55:03 +0000 (0:00:01.198) 0:06:31.426 **********
2026-04-05 01:00:34.344316 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.344324 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.344331 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.344338 | orchestrator |
2026-04-05 01:00:34.344345 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-05 01:00:34.344362 | orchestrator | Sunday 05 April 2026 00:55:04 +0000 (0:00:00.666) 0:06:32.092 **********
2026-04-05 01:00:34.344370 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.344377 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.344384 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.344391 | orchestrator |
2026-04-05 01:00:34.344398 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-05 01:00:34.344405 | orchestrator | Sunday 05 April 2026 00:55:04 +0000 (0:00:00.648) 0:06:32.741 **********
2026-04-05 01:00:34.344412 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.344419 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.344427 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.344434 | orchestrator |
2026-04-05 01:00:34.344440 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-05 01:00:34.344448 | orchestrator | Sunday 05 April 2026 00:55:05 +0000 (0:00:00.316) 0:06:33.058 **********
2026-04-05 01:00:34.344455 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.344462 | orchestrator |
2026-04-05 01:00:34.344469 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-05 01:00:34.344476 | orchestrator | Sunday 05 April 2026 00:55:05 +0000 (0:00:00.519) 0:06:33.577 **********
2026-04-05 01:00:34.344484 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.344490 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.344497 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.344511 | orchestrator |
2026-04-05 01:00:34.344518 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-05 01:00:34.344525 | orchestrator | Sunday 05 April 2026 00:55:06 +0000 (0:00:00.306) 0:06:33.884 **********
2026-04-05 01:00:34.344532 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.344539 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.344546 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.344553 | orchestrator |
2026-04-05 01:00:34.344560 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-05 01:00:34.344567 | orchestrator | Sunday 05 April 2026 00:55:06 +0000 (0:00:00.643) 0:06:34.528 **********
2026-04-05 01:00:34.344574 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.344582 | orchestrator |
2026-04-05 01:00:34.344588 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-05 01:00:34.344595 | orchestrator | Sunday 05 April 2026 00:55:07 +0000 (0:00:00.514) 0:06:35.043 **********
2026-04-05 01:00:34.344603 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.344610 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.344617 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.344624 | orchestrator |
2026-04-05 01:00:34.344631 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-05 01:00:34.344638 | orchestrator | Sunday 05 April 2026 00:55:08 +0000 (0:00:01.217) 0:06:36.260 **********
2026-04-05 01:00:34.344645 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.344653 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.344660 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.344667 | orchestrator |
2026-04-05 01:00:34.344674 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-05 01:00:34.344681 | orchestrator | Sunday 05 April 2026 00:55:09 +0000 (0:00:01.313) 0:06:37.574 **********
2026-04-05 01:00:34.344688 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.344709 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.344716 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.344723 | orchestrator |
2026-04-05 01:00:34.344731 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-05 01:00:34.344738 | orchestrator | Sunday 05 April 2026 00:55:11 +0000 (0:00:01.783) 0:06:39.358 **********
2026-04-05 01:00:34.344744 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.344750 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.344756 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.344763 | orchestrator |
2026-04-05 01:00:34.344770 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-05 01:00:34.344777 | orchestrator | Sunday 05 April 2026 00:55:13 +0000 (0:00:01.910) 0:06:41.268 **********
2026-04-05 01:00:34.344784 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.344792 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.344799 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-05 01:00:34.344806 | orchestrator |
2026-04-05 01:00:34.344813 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-05 01:00:34.344821 | orchestrator | Sunday 05 April 2026 00:55:13 +0000 (0:00:00.437) 0:06:41.706 **********
2026-04-05 01:00:34.344854 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-05 01:00:34.344863 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-05 01:00:34.344869 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-04-05 01:00:34.344875 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-04-05 01:00:34.344882 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-04-05 01:00:34.344889 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:34.344902 | orchestrator |
2026-04-05 01:00:34.344909 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-05 01:00:34.344915 | orchestrator | Sunday 05 April 2026 00:55:44 +0000 (0:00:30.480) 0:07:12.187 **********
2026-04-05 01:00:34.344920 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:34.344927 | orchestrator |
2026-04-05 01:00:34.344932 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-05 01:00:34.344943 | orchestrator | Sunday 05 April 2026 00:55:46 +0000 (0:00:02.000) 0:07:14.188 **********
2026-04-05 01:00:34.344950 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.344957 | orchestrator |
2026-04-05 01:00:34.344964 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-05 01:00:34.344986 | orchestrator | Sunday 05 April 2026 00:55:46 +0000 (0:00:00.383) 0:07:14.571 **********
2026-04-05 01:00:34.344992 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.344998 | orchestrator |
2026-04-05 01:00:34.345003 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-05 01:00:34.345009 | orchestrator | Sunday 05 April 2026 00:55:46 +0000 (0:00:00.178) 0:07:14.750 **********
2026-04-05 01:00:34.345016 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-05 01:00:34.345023 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-05 01:00:34.345030 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-05 01:00:34.345037 | orchestrator |
2026-04-05 01:00:34.345044 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-05 01:00:34.345051 | orchestrator | Sunday 05 April 2026 00:55:53 +0000 (0:00:06.579) 0:07:21.329 **********
2026-04-05 01:00:34.345058 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-05 01:00:34.345066 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-05 01:00:34.345073 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-05 01:00:34.345080 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-05 01:00:34.345087 | orchestrator |
2026-04-05 01:00:34.345094 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-05 01:00:34.345101 | orchestrator | Sunday 05 April 2026 00:55:58 +0000 (0:00:04.819) 0:07:26.148 **********
2026-04-05 01:00:34.345107 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.345113 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.345119 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.345127 | orchestrator |
2026-04-05 01:00:34.345134 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-05 01:00:34.345140 | orchestrator | Sunday 05 April 2026 00:55:59 +0000 (0:00:00.995) 0:07:27.143 **********
2026-04-05 01:00:34.345146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.345153 | orchestrator |
2026-04-05 01:00:34.345160 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-05 01:00:34.345167 | orchestrator | Sunday 05 April 2026 00:55:59 +0000 (0:00:00.557) 0:07:27.700 **********
2026-04-05 01:00:34.345175 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.345182 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.345189 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.345196 | orchestrator |
2026-04-05 01:00:34.345203 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-05 01:00:34.345210 | orchestrator | Sunday 05 April 2026 00:56:00 +0000 (0:00:00.311) 0:07:28.012 **********
2026-04-05 01:00:34.345217 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.345224 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.345231 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.345238 | orchestrator |
2026-04-05 01:00:34.345245 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-05 01:00:34.345259 | orchestrator | Sunday 05 April 2026 00:56:01 +0000 (0:00:01.691) 0:07:29.703 **********
2026-04-05 01:00:34.345266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 01:00:34.345272 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 01:00:34.345278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 01:00:34.345284 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.345291 | orchestrator |
2026-04-05 01:00:34.345299 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-05 01:00:34.345306 | orchestrator | Sunday 05 April 2026 00:56:02 +0000 (0:00:00.646) 0:07:30.350 **********
2026-04-05 01:00:34.345313 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.345320 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.345327 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.345334 | orchestrator |
2026-04-05 01:00:34.345341 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-05 01:00:34.345348 | orchestrator |
2026-04-05 01:00:34.345355 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 01:00:34.345363 | orchestrator | Sunday 05 April 2026 00:56:03 +0000 (0:00:00.686) 0:07:31.037 **********
2026-04-05 01:00:34.345420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.345429 | orchestrator |
2026-04-05 01:00:34.345437 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 01:00:34.345444 | orchestrator | Sunday 05 April 2026 00:56:03 +0000 (0:00:00.624) 0:07:31.661 **********
2026-04-05 01:00:34.345451 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.345458 | orchestrator |
2026-04-05 01:00:34.345465 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 01:00:34.345472 | orchestrator | Sunday 05 April 2026 00:56:04 +0000 (0:00:00.460) 0:07:32.122 **********
2026-04-05 01:00:34.345479 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.345514 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.345522 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.345529 | orchestrator |
2026-04-05 01:00:34.345536 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 01:00:34.345543 | orchestrator | Sunday 05 April 2026 00:56:04 +0000 (0:00:00.277) 0:07:32.399 **********
2026-04-05 01:00:34.345550 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.345557 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.345569 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.345576 | orchestrator |
2026-04-05 01:00:34.345583 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 01:00:34.345590 | orchestrator | Sunday 05 April 2026 00:56:05 +0000 (0:00:00.884) 0:07:33.284 **********
2026-04-05 01:00:34.345597 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.345604 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.345610 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.345616 | orchestrator |
2026-04-05 01:00:34.345622 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 01:00:34.345628 | orchestrator | Sunday 05 April 2026 00:56:06 +0000 (0:00:00.714) 0:07:33.999 **********
2026-04-05 01:00:34.345635 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.345642 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.345649 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.345656 | orchestrator |
2026-04-05 01:00:34.345663 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 01:00:34.345669 | orchestrator | Sunday 05 April 2026 00:56:06 +0000 (0:00:00.625) 0:07:34.624 **********
2026-04-05 01:00:34.345676 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.345681 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.345688 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.345701 | orchestrator |
2026-04-05 01:00:34.345708 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 01:00:34.345715 | orchestrator | Sunday 05 April 2026 00:56:07 +0000 (0:00:00.306) 0:07:34.931 **********
2026-04-05 01:00:34.345723 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.345730 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.345737 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.345744 | orchestrator |
2026-04-05 01:00:34.345751 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 01:00:34.345758 | orchestrator | Sunday 05 April 2026 00:56:07 +0000 (0:00:00.466) 0:07:35.397 **********
2026-04-05 01:00:34.345765 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.345772 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.345779 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.345787 | orchestrator |
2026-04-05 01:00:34.345794 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 01:00:34.345801 | orchestrator | Sunday 05 April 2026 00:56:07 +0000 (0:00:00.277) 0:07:35.675 **********
2026-04-05 01:00:34.345808 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.345815 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.345822 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.345829 | orchestrator |
2026-04-05 01:00:34.345837 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 01:00:34.345844 | orchestrator | Sunday 05 April 2026 00:56:08 +0000 (0:00:00.630) 0:07:36.306 **********
2026-04-05 01:00:34.345851 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.345858 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.345865 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.345872 | orchestrator |
2026-04-05 01:00:34.345878 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 01:00:34.345885 | orchestrator | Sunday 05 April 2026 00:56:09 +0000 (0:00:00.724) 0:07:37.030 **********
2026-04-05 01:00:34.345891 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.345897 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.345904 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.345912 | orchestrator |
2026-04-05 01:00:34.345919 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 01:00:34.345924 | orchestrator | Sunday 05 April 2026 00:56:09 +0000 (0:00:00.448) 0:07:37.479 **********
2026-04-05 01:00:34.345930 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.345936 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.345942 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.345947 | orchestrator |
2026-04-05 01:00:34.345954 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 01:00:34.345960 | orchestrator | Sunday 05 April 2026 00:56:09 +0000 (0:00:00.292) 0:07:37.771 **********
2026-04-05 01:00:34.345980 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.345987 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.345993 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.346000 | orchestrator |
2026-04-05 01:00:34.346007 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 01:00:34.346039 | orchestrator | Sunday 05 April 2026 00:56:10 +0000 (0:00:00.366) 0:07:38.138 **********
2026-04-05 01:00:34.346048 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.346055 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.346062 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.346069 | orchestrator |
2026-04-05 01:00:34.346076 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 01:00:34.346083 | orchestrator | Sunday 05 April 2026 00:56:10 +0000 (0:00:00.318) 0:07:38.457 **********
2026-04-05 01:00:34.346091 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.346098 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.346110 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.346117 | orchestrator |
2026-04-05 01:00:34.346123 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 01:00:34.346136 | orchestrator | Sunday 05 April 2026 00:56:11 +0000 (0:00:00.555) 0:07:39.013 **********
2026-04-05 01:00:34.346143 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.346149 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.346155 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.346162 | orchestrator |
2026-04-05 01:00:34.346169 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 01:00:34.346176 | orchestrator | Sunday 05 April 2026 00:56:11 +0000 (0:00:00.320) 0:07:39.333 **********
2026-04-05 01:00:34.346183 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.346190 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.346197 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.346204 | orchestrator |
2026-04-05 01:00:34.346211 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 01:00:34.346218 | orchestrator | Sunday 05 April 2026 00:56:11 +0000 (0:00:00.326) 0:07:39.660 **********
2026-04-05 01:00:34.346224 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.346230 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.346237 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.346244 | orchestrator |
2026-04-05 01:00:34.346256 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 01:00:34.346263 | orchestrator | Sunday 05 April 2026 00:56:12 +0000 (0:00:00.313) 0:07:39.973 **********
2026-04-05 01:00:34.346270 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.346276 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.346282 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.346289 | orchestrator |
2026-04-05 01:00:34.346296 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 01:00:34.346304 | orchestrator | Sunday 05 April 2026 00:56:12 +0000 (0:00:00.765) 0:07:40.739 **********
2026-04-05 01:00:34.346311 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.346318 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.346325 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.346333 | orchestrator |
2026-04-05 01:00:34.346340 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-05 01:00:34.346347 | orchestrator | Sunday 05 April 2026 00:56:13 +0000 (0:00:00.710) 0:07:41.449 **********
2026-04-05 01:00:34.346354 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.346361 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.346368 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.346375 | orchestrator |
2026-04-05 01:00:34.346383 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-05 01:00:34.346390 | orchestrator | Sunday 05 April 2026 00:56:13 +0000 (0:00:00.339) 0:07:41.789 **********
2026-04-05 01:00:34.346397 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 01:00:34.346404 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 01:00:34.346412 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 01:00:34.346419 | orchestrator |
2026-04-05 01:00:34.346426 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-05 01:00:34.346433 | orchestrator | Sunday 05 April 2026 00:56:14 +0000 (0:00:00.967) 0:07:42.757 **********
2026-04-05 01:00:34.346441 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.346448 | orchestrator |
2026-04-05 01:00:34.346455 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-05 01:00:34.346462 | orchestrator | Sunday 05 April 2026 00:56:15 +0000 (0:00:00.848) 0:07:43.605 **********
2026-04-05 01:00:34.346470 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.346477 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.346484 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.346491 | orchestrator |
2026-04-05 01:00:34.346499 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-05 01:00:34.346510 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:00.368) 0:07:43.974 **********
2026-04-05 01:00:34.346517 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.346525 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.346532 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.346539 | orchestrator |
2026-04-05 01:00:34.346546 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-05 01:00:34.346553 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:00.383) 0:07:44.357 **********
2026-04-05 01:00:34.346561 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.346568 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.346575 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.346583 | orchestrator |
2026-04-05 01:00:34.346590 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-05 01:00:34.346597 | orchestrator | Sunday 05 April 2026 00:56:17 +0000 (0:00:00.966) 0:07:45.324 **********
2026-04-05 01:00:34.346604 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.346612 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.346619 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.346626 | orchestrator |
2026-04-05 01:00:34.346633 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-05 01:00:34.346640 | orchestrator | Sunday 05 April 2026 00:56:17 +0000 (0:00:00.386) 0:07:45.711 **********
2026-04-05 01:00:34.346648 | orchestrator | changed: [testbed-node-4] => (item={'name':
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 01:00:34.346656 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 01:00:34.346663 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 01:00:34.346670 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 01:00:34.346677 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 01:00:34.346693 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 01:00:34.346700 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 01:00:34.346707 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 01:00:34.346715 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 01:00:34.346722 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 01:00:34.346729 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 01:00:34.346736 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 01:00:34.346743 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 01:00:34.346751 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 01:00:34.346761 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 01:00:34.346768 | orchestrator | 2026-04-05 01:00:34.346775 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-04-05 01:00:34.346783 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:04.233) 0:07:49.944 ********** 2026-04-05 01:00:34.346790 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.346797 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.346804 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.346811 | orchestrator | 2026-04-05 01:00:34.346819 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-05 01:00:34.346826 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:00.298) 0:07:50.243 ********** 2026-04-05 01:00:34.346833 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.346845 | orchestrator | 2026-04-05 01:00:34.346853 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-05 01:00:34.346860 | orchestrator | Sunday 05 April 2026 00:56:23 +0000 (0:00:00.791) 0:07:51.035 ********** 2026-04-05 01:00:34.346867 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 01:00:34.346874 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 01:00:34.346881 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 01:00:34.346892 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-05 01:00:34.346900 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-05 01:00:34.346907 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-05 01:00:34.346914 | orchestrator | 2026-04-05 01:00:34.346921 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-05 01:00:34.346928 | orchestrator | Sunday 05 April 2026 00:56:24 +0000 (0:00:00.890) 0:07:51.926 ********** 2026-04-05 01:00:34.346936 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.346943 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 01:00:34.346950 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:34.346957 | orchestrator | 2026-04-05 01:00:34.346964 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-05 01:00:34.347008 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:02.007) 0:07:53.933 ********** 2026-04-05 01:00:34.347015 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 01:00:34.347022 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 01:00:34.347028 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.347034 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 01:00:34.347040 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 01:00:34.347047 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.347053 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 01:00:34.347059 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 01:00:34.347066 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.347074 | orchestrator | 2026-04-05 01:00:34.347081 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-05 01:00:34.347088 | orchestrator | Sunday 05 April 2026 00:56:27 +0000 (0:00:01.220) 0:07:55.154 ********** 2026-04-05 01:00:34.347095 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:00:34.347102 | orchestrator | 2026-04-05 01:00:34.347109 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-05 01:00:34.347116 | orchestrator | Sunday 05 April 2026 00:56:29 +0000 (0:00:02.429) 0:07:57.584 ********** 2026-04-05 01:00:34.347123 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.347131 | orchestrator | 2026-04-05 01:00:34.347138 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-05 01:00:34.347145 | orchestrator | Sunday 05 April 2026 00:56:30 +0000 (0:00:00.493) 0:07:58.078 ********** 2026-04-05 01:00:34.347152 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f6b2ea8b-e42f-5ec6-b7af-dc106d037603', 'data_vg': 'ceph-f6b2ea8b-e42f-5ec6-b7af-dc106d037603'}) 2026-04-05 01:00:34.347161 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff', 'data_vg': 'ceph-2cc0fb6a-bf3f-5a25-9286-a8c7d7ff4bff'}) 2026-04-05 01:00:34.347174 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-157b1f80-825d-547a-87b1-b4c204357e87', 'data_vg': 'ceph-157b1f80-825d-547a-87b1-b4c204357e87'}) 2026-04-05 01:00:34.347181 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ecfcc343-98df-5597-aad3-97c87b883418', 'data_vg': 'ceph-ecfcc343-98df-5597-aad3-97c87b883418'}) 2026-04-05 01:00:34.347197 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e4b90bbc-8b4b-55ca-a382-2d9a937d0621', 'data_vg': 'ceph-e4b90bbc-8b4b-55ca-a382-2d9a937d0621'}) 2026-04-05 01:00:34.347204 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b6d430e-d9c3-5542-869b-9d02c8b92670', 'data_vg': 'ceph-9b6d430e-d9c3-5542-869b-9d02c8b92670'}) 2026-04-05 01:00:34.347211 | orchestrator | 2026-04-05 01:00:34.347218 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-05 01:00:34.347226 | orchestrator | Sunday 05 April 2026 00:57:10 +0000 (0:00:40.756) 0:08:38.834 ********** 2026-04-05 01:00:34.347233 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.347240 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
01:00:34.347247 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.347254 | orchestrator | 2026-04-05 01:00:34.347261 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-05 01:00:34.347272 | orchestrator | Sunday 05 April 2026 00:57:11 +0000 (0:00:00.837) 0:08:39.672 ********** 2026-04-05 01:00:34.347278 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.347285 | orchestrator | 2026-04-05 01:00:34.347293 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-05 01:00:34.347300 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.649) 0:08:40.321 ********** 2026-04-05 01:00:34.347307 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.347314 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.347321 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.347328 | orchestrator | 2026-04-05 01:00:34.347335 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-05 01:00:34.347342 | orchestrator | Sunday 05 April 2026 00:57:13 +0000 (0:00:00.691) 0:08:41.013 ********** 2026-04-05 01:00:34.347350 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.347357 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.347364 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.347371 | orchestrator | 2026-04-05 01:00:34.347378 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-05 01:00:34.347386 | orchestrator | Sunday 05 April 2026 00:57:16 +0000 (0:00:02.882) 0:08:43.895 ********** 2026-04-05 01:00:34.347393 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.347400 | orchestrator | 2026-04-05 01:00:34.347407 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-04-05 01:00:34.347414 | orchestrator | Sunday 05 April 2026 00:57:16 +0000 (0:00:00.514) 0:08:44.410 ********** 2026-04-05 01:00:34.347422 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.347429 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.347436 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.347444 | orchestrator | 2026-04-05 01:00:34.347451 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-05 01:00:34.347458 | orchestrator | Sunday 05 April 2026 00:57:17 +0000 (0:00:01.261) 0:08:45.672 ********** 2026-04-05 01:00:34.347465 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.347472 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.347479 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.347486 | orchestrator | 2026-04-05 01:00:34.347494 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-05 01:00:34.347501 | orchestrator | Sunday 05 April 2026 00:57:19 +0000 (0:00:01.409) 0:08:47.081 ********** 2026-04-05 01:00:34.347508 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.347515 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.347522 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.347529 | orchestrator | 2026-04-05 01:00:34.347537 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-05 01:00:34.347544 | orchestrator | Sunday 05 April 2026 00:57:20 +0000 (0:00:01.664) 0:08:48.746 ********** 2026-04-05 01:00:34.347556 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.347563 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.347571 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.347578 | orchestrator | 2026-04-05 01:00:34.347585 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-04-05 01:00:34.347592 | orchestrator | Sunday 05 April 2026 00:57:21 +0000 (0:00:00.328) 0:08:49.074 ********** 2026-04-05 01:00:34.347599 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.347607 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.347614 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.347621 | orchestrator | 2026-04-05 01:00:34.347629 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-05 01:00:34.347636 | orchestrator | Sunday 05 April 2026 00:57:21 +0000 (0:00:00.351) 0:08:49.426 ********** 2026-04-05 01:00:34.347643 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-04-05 01:00:34.347650 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-05 01:00:34.347658 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-04-05 01:00:34.347665 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-04-05 01:00:34.347672 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 01:00:34.347679 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-04-05 01:00:34.347687 | orchestrator | 2026-04-05 01:00:34.347694 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-05 01:00:34.347701 | orchestrator | Sunday 05 April 2026 00:57:22 +0000 (0:00:01.299) 0:08:50.725 ********** 2026-04-05 01:00:34.347708 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-04-05 01:00:34.347715 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-05 01:00:34.347722 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-05 01:00:34.347730 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-04-05 01:00:34.347737 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-05 01:00:34.347749 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-05 01:00:34.347756 | orchestrator | 2026-04-05 01:00:34.347763 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-04-05 01:00:34.347771 | orchestrator | Sunday 05 April 2026 00:57:25 +0000 (0:00:02.369) 0:08:53.095 ********** 2026-04-05 01:00:34.347778 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-05 01:00:34.347785 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-04-05 01:00:34.347792 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-05 01:00:34.347799 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-05 01:00:34.347807 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-04-05 01:00:34.347814 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-05 01:00:34.347821 | orchestrator | 2026-04-05 01:00:34.347828 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-05 01:00:34.347835 | orchestrator | Sunday 05 April 2026 00:57:28 +0000 (0:00:03.509) 0:08:56.605 ********** 2026-04-05 01:00:34.347842 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.347850 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.347857 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:00:34.347864 | orchestrator | 2026-04-05 01:00:34.347871 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-05 01:00:34.347886 | orchestrator | Sunday 05 April 2026 00:57:31 +0000 (0:00:02.545) 0:08:59.151 ********** 2026-04-05 01:00:34.347894 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.347901 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.347908 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
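The "Wait for all osd to be up" task above is a standard Ansible retry loop: it re-runs a status check (60 retries in this run) until every OSD reports up, emitting a "FAILED - RETRYING" line for each failed attempt. A minimal sketch of that retry/until pattern in plain Python, where `poll()` and the OSD counts are hypothetical stand-ins for the real `ceph osd stat` check rather than the playbook's actual code:

```python
# Hedged sketch of Ansible's retries/until behavior, as seen in the
# "Wait for all osd to be up" task. poll() simulates the cluster status
# check; it is NOT the ceph-ansible implementation.
import itertools


def wait_for_osds_up(poll, want, retries=60):
    """Retry poll() up to `retries` times until it reports `want` OSDs up."""
    for attempt in range(1, retries + 1):
        if poll() >= want:
            return True
        print(f"FAILED - RETRYING: Wait for all osd to be up "
              f"({retries - attempt} retries left).")
    return False


# Simulated run matching the log: first check sees 5 of 6 OSDs up
# (one retry logged), the next check sees all 6 and the task passes.
samples = itertools.chain([5], itertools.repeat(6))
assert wait_for_osds_up(lambda: next(samples), want=6)
```

In the real playbook the equivalent is `retries: 60` with an `until:` condition on the parsed `ceph osd stat` output, which is why a single transient "RETRYING" line here is harmless: the task still ends `ok` once all six OSDs join.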
2026-04-05 01:00:34.347915 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:00:34.347923 | orchestrator | 2026-04-05 01:00:34.347930 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-05 01:00:34.347937 | orchestrator | Sunday 05 April 2026 00:57:44 +0000 (0:00:13.204) 0:09:12.355 ********** 2026-04-05 01:00:34.347949 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.347956 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.347963 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.347986 | orchestrator | 2026-04-05 01:00:34.347993 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 01:00:34.348000 | orchestrator | Sunday 05 April 2026 00:57:45 +0000 (0:00:00.899) 0:09:13.255 ********** 2026-04-05 01:00:34.348007 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348014 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.348022 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.348029 | orchestrator | 2026-04-05 01:00:34.348036 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-05 01:00:34.348043 | orchestrator | Sunday 05 April 2026 00:57:46 +0000 (0:00:00.644) 0:09:13.900 ********** 2026-04-05 01:00:34.348050 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.348057 | orchestrator | 2026-04-05 01:00:34.348064 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-05 01:00:34.348072 | orchestrator | Sunday 05 April 2026 00:57:46 +0000 (0:00:00.579) 0:09:14.479 ********** 2026-04-05 01:00:34.348079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.348086 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-04-05 01:00:34.348093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.348100 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348106 | orchestrator | 2026-04-05 01:00:34.348112 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-05 01:00:34.348118 | orchestrator | Sunday 05 April 2026 00:57:47 +0000 (0:00:00.396) 0:09:14.876 ********** 2026-04-05 01:00:34.348125 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348131 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.348137 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.348145 | orchestrator | 2026-04-05 01:00:34.348152 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-05 01:00:34.348158 | orchestrator | Sunday 05 April 2026 00:57:47 +0000 (0:00:00.321) 0:09:15.197 ********** 2026-04-05 01:00:34.348166 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348173 | orchestrator | 2026-04-05 01:00:34.348180 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-05 01:00:34.348187 | orchestrator | Sunday 05 April 2026 00:57:47 +0000 (0:00:00.228) 0:09:15.426 ********** 2026-04-05 01:00:34.348194 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348201 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.348208 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.348215 | orchestrator | 2026-04-05 01:00:34.348222 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-05 01:00:34.348229 | orchestrator | Sunday 05 April 2026 00:57:48 +0000 (0:00:00.590) 0:09:16.016 ********** 2026-04-05 01:00:34.348236 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348243 | orchestrator | 2026-04-05 01:00:34.348251 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-05 01:00:34.348258 | orchestrator | Sunday 05 April 2026 00:57:48 +0000 (0:00:00.239) 0:09:16.256 ********** 2026-04-05 01:00:34.348265 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348271 | orchestrator | 2026-04-05 01:00:34.348277 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-05 01:00:34.348283 | orchestrator | Sunday 05 April 2026 00:57:48 +0000 (0:00:00.230) 0:09:16.486 ********** 2026-04-05 01:00:34.348290 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348297 | orchestrator | 2026-04-05 01:00:34.348305 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-05 01:00:34.348312 | orchestrator | Sunday 05 April 2026 00:57:48 +0000 (0:00:00.134) 0:09:16.620 ********** 2026-04-05 01:00:34.348325 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348333 | orchestrator | 2026-04-05 01:00:34.348344 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-05 01:00:34.348352 | orchestrator | Sunday 05 April 2026 00:57:49 +0000 (0:00:00.254) 0:09:16.875 ********** 2026-04-05 01:00:34.348359 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348366 | orchestrator | 2026-04-05 01:00:34.348374 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-05 01:00:34.348381 | orchestrator | Sunday 05 April 2026 00:57:49 +0000 (0:00:00.238) 0:09:17.113 ********** 2026-04-05 01:00:34.348388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.348395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.348401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.348407 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
01:00:34.348413 | orchestrator | 2026-04-05 01:00:34.348420 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-05 01:00:34.348427 | orchestrator | Sunday 05 April 2026 00:57:49 +0000 (0:00:00.420) 0:09:17.534 ********** 2026-04-05 01:00:34.348434 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348442 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.348448 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.348455 | orchestrator | 2026-04-05 01:00:34.348462 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-05 01:00:34.348473 | orchestrator | Sunday 05 April 2026 00:57:49 +0000 (0:00:00.323) 0:09:17.858 ********** 2026-04-05 01:00:34.348480 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348487 | orchestrator | 2026-04-05 01:00:34.348495 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-05 01:00:34.348502 | orchestrator | Sunday 05 April 2026 00:57:50 +0000 (0:00:00.928) 0:09:18.786 ********** 2026-04-05 01:00:34.348510 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348518 | orchestrator | 2026-04-05 01:00:34.348526 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-05 01:00:34.348533 | orchestrator | 2026-04-05 01:00:34.348541 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 01:00:34.348548 | orchestrator | Sunday 05 April 2026 00:57:51 +0000 (0:00:00.692) 0:09:19.479 ********** 2026-04-05 01:00:34.348556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.348564 | orchestrator | 2026-04-05 01:00:34.348570 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-04-05 01:00:34.348576 | orchestrator | Sunday 05 April 2026 00:57:52 +0000 (0:00:01.250) 0:09:20.730 ********** 2026-04-05 01:00:34.348582 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:00:34.348588 | orchestrator | 2026-04-05 01:00:34.348595 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 01:00:34.348601 | orchestrator | Sunday 05 April 2026 00:57:54 +0000 (0:00:01.277) 0:09:22.008 ********** 2026-04-05 01:00:34.348607 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.348613 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.348620 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.348626 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:00:34.348633 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:00:34.348639 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:00:34.348646 | orchestrator | 2026-04-05 01:00:34.348652 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 01:00:34.348659 | orchestrator | Sunday 05 April 2026 00:57:55 +0000 (0:00:01.072) 0:09:23.080 ********** 2026-04-05 01:00:34.348666 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.348677 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:00:34.348684 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:00:34.348690 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.348697 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:00:34.348702 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.348708 | orchestrator | 2026-04-05 01:00:34.348715 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 01:00:34.348722 | orchestrator | Sunday 05 
April 2026 00:57:56 +0000 (0:00:01.005) 0:09:24.086 **********
2026-04-05 01:00:34.348730 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.348737 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.348744 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.348751 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.348759 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.348766 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.348773 | orchestrator |
2026-04-05 01:00:34.348780 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 01:00:34.348787 | orchestrator | Sunday 05 April 2026 00:57:57 +0000 (0:00:00.782) 0:09:24.868 **********
2026-04-05 01:00:34.348794 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.348801 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.348809 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.348816 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.348823 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.348830 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.348837 | orchestrator |
2026-04-05 01:00:34.348844 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 01:00:34.348851 | orchestrator | Sunday 05 April 2026 00:57:58 +0000 (0:00:01.022) 0:09:25.891 **********
2026-04-05 01:00:34.348858 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.348865 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.348873 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.348880 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.348887 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.348894 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.348901 | orchestrator |
2026-04-05 01:00:34.348909 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 01:00:34.348916 | orchestrator | Sunday 05 April 2026 00:57:59 +0000 (0:00:01.095) 0:09:26.986 **********
2026-04-05 01:00:34.348923 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.348930 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.348942 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.348950 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.348957 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.348964 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.349010 | orchestrator |
2026-04-05 01:00:34.349018 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 01:00:34.349025 | orchestrator | Sunday 05 April 2026 00:58:00 +0000 (0:00:00.903) 0:09:27.890 **********
2026-04-05 01:00:34.349032 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.349039 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.349047 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.349054 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.349061 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.349068 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.349074 | orchestrator |
2026-04-05 01:00:34.349081 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 01:00:34.349088 | orchestrator | Sunday 05 April 2026 00:58:00 +0000 (0:00:00.649) 0:09:28.539 **********
2026-04-05 01:00:34.349096 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.349103 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.349110 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.349117 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.349124 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.349138 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.349145 | orchestrator |
2026-04-05 01:00:34.349158 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 01:00:34.349166 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:01.371) 0:09:29.910 **********
2026-04-05 01:00:34.349173 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.349180 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.349185 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.349191 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.349197 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.349204 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.349211 | orchestrator |
2026-04-05 01:00:34.349218 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 01:00:34.349225 | orchestrator | Sunday 05 April 2026 00:58:03 +0000 (0:00:01.069) 0:09:30.980 **********
2026-04-05 01:00:34.349233 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.349240 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.349247 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.349254 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.349261 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.349268 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.349274 | orchestrator |
2026-04-05 01:00:34.349280 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 01:00:34.349286 | orchestrator | Sunday 05 April 2026 00:58:04 +0000 (0:00:00.908) 0:09:31.888 **********
2026-04-05 01:00:34.349292 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.349299 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.349304 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.349310 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.349317 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.349324 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.349332 | orchestrator |
2026-04-05 01:00:34.349339 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 01:00:34.349346 | orchestrator | Sunday 05 April 2026 00:58:04 +0000 (0:00:00.620) 0:09:32.509 **********
2026-04-05 01:00:34.349353 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.349360 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.349367 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.349374 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.349381 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.349389 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.349396 | orchestrator |
2026-04-05 01:00:34.349403 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 01:00:34.349410 | orchestrator | Sunday 05 April 2026 00:58:05 +0000 (0:00:00.884) 0:09:33.393 **********
2026-04-05 01:00:34.349417 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.349424 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.349431 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.349439 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.349446 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.349453 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.349460 | orchestrator |
2026-04-05 01:00:34.349467 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 01:00:34.349474 | orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:00:00.644) 0:09:34.038 **********
2026-04-05 01:00:34.349482 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.349489 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.349496 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.349503 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.349510 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.349517 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.349524 | orchestrator |
2026-04-05 01:00:34.349532 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 01:00:34.349539 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.910) 0:09:34.948 **********
2026-04-05 01:00:34.349553 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.349561 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.349567 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.349573 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.349579 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.349586 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.349594 | orchestrator |
2026-04-05 01:00:34.349600 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 01:00:34.349607 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.640) 0:09:35.588 **********
2026-04-05 01:00:34.349615 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.349622 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.349629 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.349636 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:00:34.349643 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:00:34.349651 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:00:34.349658 | orchestrator |
2026-04-05 01:00:34.349665 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 01:00:34.349672 | orchestrator | Sunday 05 April 2026 00:58:08 +0000 (0:00:00.886) 0:09:36.475 **********
2026-04-05 01:00:34.349680 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.349692 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.349699 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.349707 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.349714 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.349721 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.349728 | orchestrator |
2026-04-05 01:00:34.349735 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 01:00:34.349742 | orchestrator | Sunday 05 April 2026 00:58:09 +0000 (0:00:00.634) 0:09:37.110 **********
2026-04-05 01:00:34.349749 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.349756 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.349763 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.349770 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.349778 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.349785 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.349792 | orchestrator |
2026-04-05 01:00:34.349799 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 01:00:34.349806 | orchestrator | Sunday 05 April 2026 00:58:10 +0000 (0:00:00.981) 0:09:38.091 **********
2026-04-05 01:00:34.349813 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.349820 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.349828 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.349835 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.349842 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.349849 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.349856 | orchestrator |
2026-04-05 01:00:34.349867 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-05 01:00:34.349875 | orchestrator | Sunday 05 April 2026 00:58:11 +0000 (0:00:01.504) 0:09:39.595 **********
2026-04-05 01:00:34.349882 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:34.349889 | orchestrator |
2026-04-05 01:00:34.349896 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-05 01:00:34.349903 | orchestrator | Sunday 05 April 2026 00:58:15 +0000 (0:00:04.043) 0:09:43.639 **********
2026-04-05 01:00:34.349910 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:34.349917 | orchestrator |
2026-04-05 01:00:34.349925 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-05 01:00:34.349932 | orchestrator | Sunday 05 April 2026 00:58:17 +0000 (0:00:01.951) 0:09:45.590 **********
2026-04-05 01:00:34.349939 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.349946 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.349953 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.349980 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.349988 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.349995 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.350002 | orchestrator |
2026-04-05 01:00:34.350009 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-05 01:00:34.350043 | orchestrator | Sunday 05 April 2026 00:58:19 +0000 (0:00:01.821) 0:09:47.411 **********
2026-04-05 01:00:34.350051 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.350058 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.350065 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.350072 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.350079 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.350086 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.350093 | orchestrator |
2026-04-05 01:00:34.350100 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-05 01:00:34.350107 | orchestrator | Sunday 05 April 2026 00:58:20 +0000 (0:00:01.412) 0:09:48.824 **********
2026-04-05 01:00:34.350114 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.350124 | orchestrator |
2026-04-05 01:00:34.350131 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-05 01:00:34.350138 | orchestrator | Sunday 05 April 2026 00:58:22 +0000 (0:00:01.311) 0:09:50.135 **********
2026-04-05 01:00:34.350145 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.350152 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.350159 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.350166 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.350173 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.350180 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.350187 | orchestrator |
2026-04-05 01:00:34.350195 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-05 01:00:34.350202 | orchestrator | Sunday 05 April 2026 00:58:23 +0000 (0:00:01.600) 0:09:51.736 **********
2026-04-05 01:00:34.350209 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.350216 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.350223 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.350230 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.350237 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.350244 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.350252 | orchestrator |
2026-04-05 01:00:34.350259 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-05 01:00:34.350266 | orchestrator | Sunday 05 April 2026 00:58:27 +0000 (0:00:03.772) 0:09:55.509 **********
2026-04-05 01:00:34.350272 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:00:34.350278 | orchestrator |
2026-04-05 01:00:34.350285 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-05 01:00:34.350292 | orchestrator | Sunday 05 April 2026 00:58:28 +0000 (0:00:01.285) 0:09:56.794 **********
2026-04-05 01:00:34.350300 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.350307 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.350314 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.350321 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.350329 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.350336 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.350343 | orchestrator |
2026-04-05 01:00:34.350350 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-05 01:00:34.350357 | orchestrator | Sunday 05 April 2026 00:58:29 +0000 (0:00:00.618) 0:09:57.413 **********
2026-04-05 01:00:34.350363 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.350374 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.350381 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.350393 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:00:34.350401 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:00:34.350408 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:00:34.350415 | orchestrator |
2026-04-05 01:00:34.350423 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-05 01:00:34.350430 | orchestrator | Sunday 05 April 2026 00:58:32 +0000 (0:00:02.833) 0:10:00.246 **********
2026-04-05 01:00:34.350437 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.350444 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.350452 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.350459 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:00:34.350466 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:00:34.350473 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:00:34.350480 | orchestrator |
2026-04-05 01:00:34.350487 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-05 01:00:34.350495 | orchestrator |
2026-04-05 01:00:34.350502 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 01:00:34.350509 | orchestrator | Sunday 05 April 2026 00:58:33 +0000 (0:00:00.891) 0:10:01.138 **********
2026-04-05 01:00:34.350521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.350529 | orchestrator |
2026-04-05 01:00:34.350536 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 01:00:34.350543 | orchestrator | Sunday 05 April 2026 00:58:34 +0000 (0:00:00.855) 0:10:01.993 **********
2026-04-05 01:00:34.350551 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.350558 | orchestrator |
2026-04-05 01:00:34.350565 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 01:00:34.350571 | orchestrator | Sunday 05 April 2026 00:58:34 +0000 (0:00:00.530) 0:10:02.523 **********
2026-04-05 01:00:34.350578 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.350584 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.350592 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.350598 | orchestrator |
2026-04-05 01:00:34.350606 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 01:00:34.350613 | orchestrator | Sunday 05 April 2026 00:58:35 +0000 (0:00:00.583) 0:10:03.106 **********
2026-04-05 01:00:34.350620 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.350627 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.350634 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.350641 | orchestrator |
2026-04-05 01:00:34.350649 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 01:00:34.350656 | orchestrator | Sunday 05 April 2026 00:58:35 +0000 (0:00:00.754) 0:10:03.861 **********
2026-04-05 01:00:34.350663 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.350670 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.350677 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.350684 | orchestrator |
2026-04-05 01:00:34.350691 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 01:00:34.350699 | orchestrator | Sunday 05 April 2026 00:58:36 +0000 (0:00:00.747) 0:10:04.609 **********
2026-04-05 01:00:34.350706 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.350713 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.350720 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.350727 | orchestrator |
2026-04-05 01:00:34.350734 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 01:00:34.350741 | orchestrator | Sunday 05 April 2026 00:58:37 +0000 (0:00:00.740) 0:10:05.349 **********
2026-04-05 01:00:34.350748 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.350755 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.350762 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.350769 | orchestrator |
2026-04-05 01:00:34.350776 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 01:00:34.350790 | orchestrator | Sunday 05 April 2026 00:58:38 +0000 (0:00:00.616) 0:10:05.965 **********
2026-04-05 01:00:34.350797 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.350805 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.350812 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.350819 | orchestrator |
2026-04-05 01:00:34.350826 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 01:00:34.350833 | orchestrator | Sunday 05 April 2026 00:58:38 +0000 (0:00:00.313) 0:10:06.279 **********
2026-04-05 01:00:34.350840 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.350847 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.350854 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.350862 | orchestrator |
2026-04-05 01:00:34.350869 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 01:00:34.350876 | orchestrator | Sunday 05 April 2026 00:58:38 +0000 (0:00:00.350) 0:10:06.630 **********
2026-04-05 01:00:34.350883 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.350890 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.350898 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.350905 | orchestrator |
2026-04-05 01:00:34.350912 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 01:00:34.350919 | orchestrator | Sunday 05 April 2026 00:58:39 +0000 (0:00:00.727) 0:10:07.357 **********
2026-04-05 01:00:34.350926 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.350934 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.350941 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.350948 | orchestrator |
2026-04-05 01:00:34.350955 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 01:00:34.350962 | orchestrator | Sunday 05 April 2026 00:58:40 +0000 (0:00:01.047) 0:10:08.405 **********
2026-04-05 01:00:34.350983 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.350989 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.350995 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.351001 | orchestrator |
2026-04-05 01:00:34.351008 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 01:00:34.351015 | orchestrator | Sunday 05 April 2026 00:58:40 +0000 (0:00:00.334) 0:10:08.739 **********
2026-04-05 01:00:34.351022 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.351035 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.351042 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.351050 | orchestrator |
2026-04-05 01:00:34.351057 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 01:00:34.351064 | orchestrator | Sunday 05 April 2026 00:58:41 +0000 (0:00:00.367) 0:10:09.107 **********
2026-04-05 01:00:34.351071 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.351078 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.351085 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.351092 | orchestrator |
2026-04-05 01:00:34.351099 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 01:00:34.351106 | orchestrator | Sunday 05 April 2026 00:58:41 +0000 (0:00:00.393) 0:10:09.500 **********
2026-04-05 01:00:34.351113 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.351120 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.351127 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.351134 | orchestrator |
2026-04-05 01:00:34.351141 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 01:00:34.351149 | orchestrator | Sunday 05 April 2026 00:58:42 +0000 (0:00:00.738) 0:10:10.238 **********
2026-04-05 01:00:34.351156 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.351163 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.351170 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.351177 | orchestrator |
2026-04-05 01:00:34.351188 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 01:00:34.351196 | orchestrator | Sunday 05 April 2026 00:58:42 +0000 (0:00:00.372) 0:10:10.611 **********
2026-04-05 01:00:34.351208 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.351216 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.351223 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.351230 | orchestrator |
2026-04-05 01:00:34.351237 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 01:00:34.351244 | orchestrator | Sunday 05 April 2026 00:58:43 +0000 (0:00:00.313) 0:10:10.925 **********
2026-04-05 01:00:34.351251 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.351258 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.351265 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.351271 | orchestrator |
2026-04-05 01:00:34.351278 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 01:00:34.351284 | orchestrator | Sunday 05 April 2026 00:58:43 +0000 (0:00:00.369) 0:10:11.294 **********
2026-04-05 01:00:34.351291 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.351299 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.351306 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.351313 | orchestrator |
2026-04-05 01:00:34.351320 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 01:00:34.351327 | orchestrator | Sunday 05 April 2026 00:58:43 +0000 (0:00:00.314) 0:10:11.608 **********
2026-04-05 01:00:34.351334 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.351341 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.351349 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.351356 | orchestrator |
2026-04-05 01:00:34.351363 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 01:00:34.351370 | orchestrator | Sunday 05 April 2026 00:58:44 +0000 (0:00:00.788) 0:10:12.397 **********
2026-04-05 01:00:34.351378 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.351385 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.351392 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.351399 | orchestrator |
2026-04-05 01:00:34.351406 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-05 01:00:34.351413 | orchestrator | Sunday 05 April 2026 00:58:45 +0000 (0:00:00.583) 0:10:12.981 **********
2026-04-05 01:00:34.351420 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.351428 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.351435 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-05 01:00:34.351442 | orchestrator |
2026-04-05 01:00:34.351448 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-05 01:00:34.351454 | orchestrator | Sunday 05 April 2026 00:58:45 +0000 (0:00:00.677) 0:10:13.658 **********
2026-04-05 01:00:34.351460 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:34.351467 | orchestrator |
2026-04-05 01:00:34.351473 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-05 01:00:34.351480 | orchestrator | Sunday 05 April 2026 00:58:48 +0000 (0:00:02.279) 0:10:15.937 **********
2026-04-05 01:00:34.351489 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-05 01:00:34.351498 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.351505 | orchestrator |
2026-04-05 01:00:34.351512 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-05 01:00:34.351519 | orchestrator | Sunday 05 April 2026 00:58:48 +0000 (0:00:00.262) 0:10:16.200 **********
2026-04-05 01:00:34.351528 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-05 01:00:34.351542 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-05 01:00:34.351559 | orchestrator |
2026-04-05 01:00:34.351566 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-05 01:00:34.351574 | orchestrator | Sunday 05 April 2026 00:58:56 +0000 (0:00:08.192) 0:10:24.392 **********
2026-04-05 01:00:34.351585 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 01:00:34.351592 | orchestrator |
2026-04-05 01:00:34.351599 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-05 01:00:34.351607 | orchestrator | Sunday 05 April 2026 00:59:00 +0000 (0:00:03.668) 0:10:28.061 **********
2026-04-05 01:00:34.351614 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.351621 | orchestrator |
2026-04-05 01:00:34.351628 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-05 01:00:34.351635 | orchestrator | Sunday 05 April 2026 00:59:00 +0000 (0:00:00.593) 0:10:28.655 **********
2026-04-05 01:00:34.351642 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 01:00:34.351649 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-05 01:00:34.351657 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 01:00:34.351664 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 01:00:34.351671 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-05 01:00:34.351682 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-05 01:00:34.351689 | orchestrator |
2026-04-05 01:00:34.351696 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-05 01:00:34.351703 | orchestrator | Sunday 05 April 2026 00:59:02 +0000 (0:00:01.404) 0:10:30.059 **********
2026-04-05 01:00:34.351710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 01:00:34.351717 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 01:00:34.351724 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-05 01:00:34.351732 | orchestrator |
2026-04-05 01:00:34.351739 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-05 01:00:34.351746 | orchestrator | Sunday 05 April 2026 00:59:04 +0000 (0:00:02.147) 0:10:32.206 **********
2026-04-05 01:00:34.351753 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 01:00:34.351760 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 01:00:34.351767 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.351774 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 01:00:34.351782 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-05 01:00:34.351789 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.351796 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 01:00:34.351803 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-05 01:00:34.351810 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.351817 | orchestrator |
2026-04-05 01:00:34.351824 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-05 01:00:34.351831 | orchestrator | Sunday 05 April 2026 00:59:05 +0000 (0:00:01.218) 0:10:33.425 **********
2026-04-05 01:00:34.351838 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.351845 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.351853 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.351860 | orchestrator |
2026-04-05 01:00:34.351867 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-05 01:00:34.351874 | orchestrator | Sunday 05 April 2026 00:59:08 +0000 (0:00:02.762) 0:10:36.188 **********
2026-04-05 01:00:34.351881 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.351888 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:00:34.351901 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:00:34.351908 | orchestrator |
2026-04-05 01:00:34.351915 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-05 01:00:34.351922 | orchestrator | Sunday 05 April 2026 00:59:08 +0000 (0:00:00.610) 0:10:36.798 **********
2026-04-05 01:00:34.351930 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.351937 | orchestrator |
2026-04-05 01:00:34.351944 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-05 01:00:34.351951 | orchestrator | Sunday 05 April 2026 00:59:09 +0000 (0:00:00.555) 0:10:37.354 **********
2026-04-05 01:00:34.351958 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.351997 | orchestrator |
2026-04-05 01:00:34.352007 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-05 01:00:34.352014 | orchestrator | Sunday 05 April 2026 00:59:10 +0000 (0:00:00.872) 0:10:38.226 **********
2026-04-05 01:00:34.352022 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.352029 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.352037 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.352044 | orchestrator |
2026-04-05 01:00:34.352051 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-05 01:00:34.352059 | orchestrator | Sunday 05 April 2026 00:59:11 +0000 (0:00:01.303) 0:10:39.530 **********
2026-04-05 01:00:34.352066 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.352073 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.352081 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.352088 | orchestrator |
2026-04-05 01:00:34.352096 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-05 01:00:34.352103 | orchestrator | Sunday 05 April 2026 00:59:12 +0000 (0:00:01.233) 0:10:40.763 **********
2026-04-05 01:00:34.352110 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.352118 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.352125 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.352133 | orchestrator |
2026-04-05 01:00:34.352140 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-05 01:00:34.352147 | orchestrator | Sunday 05 April 2026 00:59:14 +0000 (0:00:01.705) 0:10:42.469 **********
2026-04-05 01:00:34.352155 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.352167 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.352175 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.352182 | orchestrator |
2026-04-05 01:00:34.352190 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-05 01:00:34.352197 | orchestrator | Sunday 05 April 2026 00:59:16 +0000 (0:00:02.236) 0:10:44.706 **********
2026-04-05 01:00:34.352205 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.352212 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.352220 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.352227 | orchestrator |
2026-04-05 01:00:34.352235 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-05 01:00:34.352242 | orchestrator | Sunday 05 April 2026 00:59:18 +0000 (0:00:01.366) 0:10:46.072 **********
2026-04-05 01:00:34.352250 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.352257 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.352265 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.352271 | orchestrator |
2026-04-05 01:00:34.352277 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-05 01:00:34.352284 | orchestrator | Sunday 05 April 2026 00:59:19 +0000 (0:00:00.936) 0:10:47.008 **********
2026-04-05 01:00:34.352291 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:00:34.352299 | orchestrator |
2026-04-05 01:00:34.352311 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-05 01:00:34.352324 | orchestrator | Sunday 05 April 2026 00:59:19 +0000 (0:00:00.541) 0:10:47.550 **********
2026-04-05 01:00:34.352331 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.352339 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.352347 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:00:34.352354 | orchestrator |
2026-04-05 01:00:34.352361 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-05 01:00:34.352369 | orchestrator | Sunday 05 April 2026 00:59:20 +0000 (0:00:00.320) 0:10:47.870 **********
2026-04-05 01:00:34.352376 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:00:34.352383 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:00:34.352391 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:00:34.352398 | orchestrator |
2026-04-05 01:00:34.352405 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-05 01:00:34.352413 | orchestrator | Sunday 05 April 2026 00:59:21 +0000 (0:00:01.596) 0:10:49.467 **********
2026-04-05 01:00:34.352420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 01:00:34.352428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 01:00:34.352435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 01:00:34.352443 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:00:34.352450 | orchestrator |
2026-04-05 01:00:34.352457 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-05 01:00:34.352465 | orchestrator | Sunday 05 April 2026 00:59:22 +0000 (0:00:00.687) 0:10:50.155 **********
2026-04-05 01:00:34.352472 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:00:34.352479 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:00:34.352487 | orchestrator | ok: [testbed-node-5]
2026-04-05
01:00:34.352493 | orchestrator | 2026-04-05 01:00:34.352500 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-05 01:00:34.352508 | orchestrator | 2026-04-05 01:00:34.352515 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 01:00:34.352521 | orchestrator | Sunday 05 April 2026 00:59:22 +0000 (0:00:00.573) 0:10:50.728 ********** 2026-04-05 01:00:34.352527 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.352533 | orchestrator | 2026-04-05 01:00:34.352540 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 01:00:34.352546 | orchestrator | Sunday 05 April 2026 00:59:23 +0000 (0:00:00.768) 0:10:51.496 ********** 2026-04-05 01:00:34.352553 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.352561 | orchestrator | 2026-04-05 01:00:34.352568 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 01:00:34.352575 | orchestrator | Sunday 05 April 2026 00:59:24 +0000 (0:00:00.609) 0:10:52.106 ********** 2026-04-05 01:00:34.352582 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.352590 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.352597 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.352605 | orchestrator | 2026-04-05 01:00:34.352612 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 01:00:34.352619 | orchestrator | Sunday 05 April 2026 00:59:24 +0000 (0:00:00.309) 0:10:52.416 ********** 2026-04-05 01:00:34.352626 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.352633 | orchestrator | ok: [testbed-node-4] 2026-04-05 
01:00:34.352640 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.352648 | orchestrator | 2026-04-05 01:00:34.352655 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 01:00:34.352662 | orchestrator | Sunday 05 April 2026 00:59:25 +0000 (0:00:00.996) 0:10:53.412 ********** 2026-04-05 01:00:34.352669 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.352676 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.352683 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.352691 | orchestrator | 2026-04-05 01:00:34.352703 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 01:00:34.352710 | orchestrator | Sunday 05 April 2026 00:59:26 +0000 (0:00:00.787) 0:10:54.200 ********** 2026-04-05 01:00:34.352718 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.352725 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.352732 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.352739 | orchestrator | 2026-04-05 01:00:34.352746 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 01:00:34.352753 | orchestrator | Sunday 05 April 2026 00:59:27 +0000 (0:00:00.729) 0:10:54.929 ********** 2026-04-05 01:00:34.352760 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.352768 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.352775 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.352782 | orchestrator | 2026-04-05 01:00:34.352794 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 01:00:34.352801 | orchestrator | Sunday 05 April 2026 00:59:27 +0000 (0:00:00.302) 0:10:55.232 ********** 2026-04-05 01:00:34.352809 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.352816 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.352823 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 01:00:34.352830 | orchestrator | 2026-04-05 01:00:34.352838 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 01:00:34.352845 | orchestrator | Sunday 05 April 2026 00:59:27 +0000 (0:00:00.614) 0:10:55.846 ********** 2026-04-05 01:00:34.352853 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.352860 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.352867 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.352875 | orchestrator | 2026-04-05 01:00:34.352882 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 01:00:34.352890 | orchestrator | Sunday 05 April 2026 00:59:28 +0000 (0:00:00.334) 0:10:56.181 ********** 2026-04-05 01:00:34.352897 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.352905 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.352912 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.352919 | orchestrator | 2026-04-05 01:00:34.352927 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 01:00:34.352939 | orchestrator | Sunday 05 April 2026 00:59:29 +0000 (0:00:00.927) 0:10:57.109 ********** 2026-04-05 01:00:34.352946 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.352954 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.352961 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.352981 | orchestrator | 2026-04-05 01:00:34.352988 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 01:00:34.352996 | orchestrator | Sunday 05 April 2026 00:59:29 +0000 (0:00:00.709) 0:10:57.818 ********** 2026-04-05 01:00:34.353003 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.353010 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.353017 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
01:00:34.353025 | orchestrator | 2026-04-05 01:00:34.353032 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 01:00:34.353039 | orchestrator | Sunday 05 April 2026 00:59:30 +0000 (0:00:00.672) 0:10:58.491 ********** 2026-04-05 01:00:34.353046 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.353054 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.353061 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.353068 | orchestrator | 2026-04-05 01:00:34.353075 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 01:00:34.353082 | orchestrator | Sunday 05 April 2026 00:59:31 +0000 (0:00:00.406) 0:10:58.897 ********** 2026-04-05 01:00:34.353090 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.353097 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.353104 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.353111 | orchestrator | 2026-04-05 01:00:34.353119 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 01:00:34.353131 | orchestrator | Sunday 05 April 2026 00:59:31 +0000 (0:00:00.352) 0:10:59.249 ********** 2026-04-05 01:00:34.353138 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.353145 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.353152 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.353159 | orchestrator | 2026-04-05 01:00:34.353167 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 01:00:34.353174 | orchestrator | Sunday 05 April 2026 00:59:31 +0000 (0:00:00.338) 0:10:59.588 ********** 2026-04-05 01:00:34.353181 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.353188 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.353195 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.353202 | orchestrator | 2026-04-05 
01:00:34.353210 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 01:00:34.353217 | orchestrator | Sunday 05 April 2026 00:59:32 +0000 (0:00:00.622) 0:11:00.211 ********** 2026-04-05 01:00:34.353224 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.353232 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.353239 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.353246 | orchestrator | 2026-04-05 01:00:34.353253 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 01:00:34.353261 | orchestrator | Sunday 05 April 2026 00:59:32 +0000 (0:00:00.357) 0:11:00.568 ********** 2026-04-05 01:00:34.353267 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.353273 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.353279 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.353287 | orchestrator | 2026-04-05 01:00:34.353294 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 01:00:34.353301 | orchestrator | Sunday 05 April 2026 00:59:33 +0000 (0:00:00.328) 0:11:00.897 ********** 2026-04-05 01:00:34.353308 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.353316 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.353323 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.353331 | orchestrator | 2026-04-05 01:00:34.353338 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 01:00:34.353345 | orchestrator | Sunday 05 April 2026 00:59:33 +0000 (0:00:00.313) 0:11:01.211 ********** 2026-04-05 01:00:34.353352 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.353359 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.353366 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.353373 | orchestrator | 2026-04-05 01:00:34.353381 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 01:00:34.353388 | orchestrator | Sunday 05 April 2026 00:59:33 +0000 (0:00:00.633) 0:11:01.844 ********** 2026-04-05 01:00:34.353395 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.353402 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.353409 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.353417 | orchestrator | 2026-04-05 01:00:34.353424 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-05 01:00:34.353431 | orchestrator | Sunday 05 April 2026 00:59:34 +0000 (0:00:00.594) 0:11:02.438 ********** 2026-04-05 01:00:34.353438 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.353446 | orchestrator | 2026-04-05 01:00:34.353453 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 01:00:34.353466 | orchestrator | Sunday 05 April 2026 00:59:35 +0000 (0:00:00.757) 0:11:03.196 ********** 2026-04-05 01:00:34.353473 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.353480 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 01:00:34.353487 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:34.353494 | orchestrator | 2026-04-05 01:00:34.353501 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 01:00:34.353508 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:02.144) 0:11:05.340 ********** 2026-04-05 01:00:34.353520 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 01:00:34.353527 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 01:00:34.353534 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 01:00:34.353541 
| orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.353548 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 01:00:34.353555 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.353562 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 01:00:34.353570 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 01:00:34.353577 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.353584 | orchestrator | 2026-04-05 01:00:34.353595 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-05 01:00:34.353601 | orchestrator | Sunday 05 April 2026 00:59:38 +0000 (0:00:01.346) 0:11:06.687 ********** 2026-04-05 01:00:34.353607 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.353613 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.353620 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.353626 | orchestrator | 2026-04-05 01:00:34.353633 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-05 01:00:34.353640 | orchestrator | Sunday 05 April 2026 00:59:39 +0000 (0:00:00.312) 0:11:06.999 ********** 2026-04-05 01:00:34.353647 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.353654 | orchestrator | 2026-04-05 01:00:34.353661 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-05 01:00:34.353668 | orchestrator | Sunday 05 April 2026 00:59:39 +0000 (0:00:00.806) 0:11:07.805 ********** 2026-04-05 01:00:34.353675 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:34.353684 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:34.353691 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:34.353698 | orchestrator | 2026-04-05 01:00:34.353705 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-05 01:00:34.353713 | orchestrator | Sunday 05 April 2026 00:59:40 +0000 (0:00:00.886) 0:11:08.691 ********** 2026-04-05 01:00:34.353720 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.353727 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 01:00:34.353734 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.353741 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 01:00:34.353748 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.353756 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 01:00:34.353763 | orchestrator | 2026-04-05 01:00:34.353770 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 01:00:34.353777 | orchestrator | Sunday 05 April 2026 00:59:45 +0000 (0:00:04.488) 0:11:13.180 ********** 2026-04-05 01:00:34.353784 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.353791 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:34.353798 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.353805 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:34.353816 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:34.353823 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:34.353830 | orchestrator | 2026-04-05 01:00:34.353838 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 01:00:34.353845 | orchestrator | Sunday 05 April 2026 00:59:47 +0000 (0:00:02.216) 0:11:15.396 ********** 2026-04-05 01:00:34.353852 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 01:00:34.353859 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.353866 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 01:00:34.353873 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.353880 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 01:00:34.353887 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.353894 | orchestrator | 2026-04-05 01:00:34.353902 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-05 01:00:34.353913 | orchestrator | Sunday 05 April 2026 00:59:49 +0000 (0:00:01.567) 0:11:16.964 ********** 2026-04-05 01:00:34.353921 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-05 01:00:34.353928 | orchestrator | 2026-04-05 01:00:34.353935 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-05 01:00:34.353942 | orchestrator | Sunday 05 April 2026 00:59:49 +0000 (0:00:00.234) 0:11:17.198 ********** 2026-04-05 01:00:34.353949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-05 01:00:34.353957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.353964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.353982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.353991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.353998 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.354005 | orchestrator | 2026-04-05 01:00:34.354012 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-05 01:00:34.354048 | orchestrator | Sunday 05 April 2026 00:59:49 +0000 (0:00:00.641) 0:11:17.840 ********** 2026-04-05 01:00:34.354055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.354063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.354070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.354078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.354085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 01:00:34.354093 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
01:00:34.354100 | orchestrator | 2026-04-05 01:00:34.354108 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-05 01:00:34.354115 | orchestrator | Sunday 05 April 2026 00:59:50 +0000 (0:00:00.636) 0:11:18.476 ********** 2026-04-05 01:00:34.354123 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:34.354137 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:34.354145 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:34.354152 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:34.354160 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 01:00:34.354167 | orchestrator | 2026-04-05 01:00:34.354174 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-05 01:00:34.354182 | orchestrator | Sunday 05 April 2026 01:00:20 +0000 (0:00:29.753) 0:11:48.229 ********** 2026-04-05 01:00:34.354189 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.354197 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.354204 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.354211 | orchestrator | 2026-04-05 01:00:34.354219 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-05 01:00:34.354226 | orchestrator | 
Sunday 05 April 2026 01:00:20 +0000 (0:00:00.321) 0:11:48.551 ********** 2026-04-05 01:00:34.354234 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.354241 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.354249 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.354256 | orchestrator | 2026-04-05 01:00:34.354264 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-05 01:00:34.354271 | orchestrator | Sunday 05 April 2026 01:00:21 +0000 (0:00:00.705) 0:11:49.256 ********** 2026-04-05 01:00:34.354277 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.354283 | orchestrator | 2026-04-05 01:00:34.354290 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-05 01:00:34.354298 | orchestrator | Sunday 05 April 2026 01:00:22 +0000 (0:00:00.671) 0:11:49.928 ********** 2026-04-05 01:00:34.354305 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.354313 | orchestrator | 2026-04-05 01:00:34.354325 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-05 01:00:34.354333 | orchestrator | Sunday 05 April 2026 01:00:22 +0000 (0:00:00.867) 0:11:50.796 ********** 2026-04-05 01:00:34.354340 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.354348 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.354355 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.354363 | orchestrator | 2026-04-05 01:00:34.354370 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-05 01:00:34.354377 | orchestrator | Sunday 05 April 2026 01:00:24 +0000 (0:00:01.396) 0:11:52.192 ********** 2026-04-05 01:00:34.354385 | orchestrator | changed: 
[testbed-node-3] 2026-04-05 01:00:34.354392 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.354400 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.354407 | orchestrator | 2026-04-05 01:00:34.354414 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-05 01:00:34.354422 | orchestrator | Sunday 05 April 2026 01:00:25 +0000 (0:00:01.042) 0:11:53.235 ********** 2026-04-05 01:00:34.354429 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:00:34.354437 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:00:34.354444 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:00:34.354451 | orchestrator | 2026-04-05 01:00:34.354459 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-05 01:00:34.354471 | orchestrator | Sunday 05 April 2026 01:00:27 +0000 (0:00:01.641) 0:11:54.876 ********** 2026-04-05 01:00:34.354486 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:34.354494 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:34.354501 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 01:00:34.354508 | orchestrator | 2026-04-05 01:00:34.354516 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 01:00:34.354524 | orchestrator | Sunday 05 April 2026 01:00:29 +0000 (0:00:02.595) 0:11:57.472 ********** 2026-04-05 01:00:34.354530 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.354538 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.354545 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.354553 | orchestrator 
| 2026-04-05 01:00:34.354560 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-05 01:00:34.354567 | orchestrator | Sunday 05 April 2026 01:00:29 +0000 (0:00:00.381) 0:11:57.853 ********** 2026-04-05 01:00:34.354574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:34.354582 | orchestrator | 2026-04-05 01:00:34.354590 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-05 01:00:34.354597 | orchestrator | Sunday 05 April 2026 01:00:30 +0000 (0:00:00.950) 0:11:58.803 ********** 2026-04-05 01:00:34.354605 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.354612 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.354619 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.354627 | orchestrator | 2026-04-05 01:00:34.354634 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-05 01:00:34.354642 | orchestrator | Sunday 05 April 2026 01:00:31 +0000 (0:00:00.370) 0:11:59.173 ********** 2026-04-05 01:00:34.354649 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:34.354657 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:34.354664 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:34.354671 | orchestrator | 2026-04-05 01:00:34.354678 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-05 01:00:34.354684 | orchestrator | Sunday 05 April 2026 01:00:31 +0000 (0:00:00.370) 0:11:59.543 ********** 2026-04-05 01:00:34.354690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:34.354697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:34.354704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:34.354711 | orchestrator 
| skipping: [testbed-node-3] 2026-04-05 01:00:34.354718 | orchestrator | 2026-04-05 01:00:34.354725 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-05 01:00:34.354733 | orchestrator | Sunday 05 April 2026 01:00:32 +0000 (0:00:00.898) 0:12:00.442 ********** 2026-04-05 01:00:34.354740 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:34.354747 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:34.354754 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:34.354761 | orchestrator | 2026-04-05 01:00:34.354769 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:00:34.354776 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-05 01:00:34.354784 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-05 01:00:34.354792 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-05 01:00:34.354799 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-05 01:00:34.354815 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-05 01:00:34.354827 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-05 01:00:34.354835 | orchestrator | 2026-04-05 01:00:34.354842 | orchestrator | 2026-04-05 01:00:34.354850 | orchestrator | 2026-04-05 01:00:34.354857 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:00:34.354865 | orchestrator | Sunday 05 April 2026 01:00:33 +0000 (0:00:00.529) 0:12:00.971 ********** 2026-04-05 01:00:34.354872 | orchestrator | =============================================================================== 
2026-04-05 01:00:34.354880 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 63.08s
2026-04-05 01:00:34.354887 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.76s
2026-04-05 01:00:34.354894 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.48s
2026-04-05 01:00:34.354902 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.75s
2026-04-05 01:00:34.354909 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.84s
2026-04-05 01:00:34.354917 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.21s
2026-04-05 01:00:34.354924 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.20s
2026-04-05 01:00:34.354935 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.60s
2026-04-05 01:00:34.354943 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.38s
2026-04-05 01:00:34.354951 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.19s
2026-04-05 01:00:34.354958 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.35s
2026-04-05 01:00:34.354965 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.58s
2026-04-05 01:00:34.355008 | orchestrator | ceph-facts : Set_fact _container_exec_cmd ------------------------------- 4.95s
2026-04-05 01:00:34.355015 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.82s
2026-04-05 01:00:34.355022 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.49s
2026-04-05 01:00:34.355029 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.42s
2026-04-05 01:00:34.355036 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.23s
2026-04-05 01:00:34.355044 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.08s
2026-04-05 01:00:34.355051 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.04s
2026-04-05 01:00:34.355058 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.97s
2026-04-05 01:00:34.355065 | orchestrator | 2026-04-05 01:00:34 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:00:37.383789 | orchestrator | 2026-04-05 01:00:37 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state STARTED
2026-04-05 01:00:37.384423 | orchestrator | 2026-04-05 01:00:37 | INFO  | Task 4b689868-9cb0-4085-9617-92ada9edecb2 is in state SUCCESS
2026-04-05 01:00:37.385955 | orchestrator | 2026-04-05 01:00:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:00:37.387593 | orchestrator | 2026-04-05 01:00:37 | INFO  | Task 4acc7531-ffc8-4d84-bc68-42bb37bfe102 is in state STARTED
2026-04-05 01:00:37.388136 | orchestrator | 2026-04-05 01:00:37 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:00:49.596365 | orchestrator | 2026-04-05 01:00:49 | INFO  | Task b0b9cc08-715f-4fc2-aed6-26d9457b370e is in state SUCCESS
2026-04-05 01:02:48.545052 | orchestrator | 2026-04-05 01:02:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:02:48.550313 | orchestrator | 2026-04-05 01:02:48 | INFO  | Task 4acc7531-ffc8-4d84-bc68-42bb37bfe102 is in state SUCCESS
2026-04-05 01:02:48.552318 | orchestrator | 2026-04-05 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:48.554299 | orchestrator |
2026-04-05 01:02:48.554438 | orchestrator |
2026-04-05 01:02:48.554462 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:02:48.554479 | orchestrator |
2026-04-05 01:02:48.554495 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:02:48.554512 | orchestrator | Sunday 05 April 2026 00:59:41 +0000 (0:00:00.347) 0:00:00.347 **********
2026-04-05 01:02:48.554528 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:02:48.554546 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:02:48.554561 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:02:48.554578 | orchestrator |
2026-04-05 01:02:48.554596 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:02:48.554614 | orchestrator | Sunday 05 April 2026 00:59:42 +0000 (0:00:00.378) 0:00:00.726 **********
2026-04-05 01:02:48.554632 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-05 01:02:48.554650 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-05 01:02:48.554663 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-05 01:02:48.554675 | orchestrator |
2026-04-05 01:02:48.554687 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-05 01:02:48.554698 | orchestrator |
2026-04-05 01:02:48.554710 | orchestrator | TASK [placement : include_tasks]
***********************************************
2026-04-05 01:02:48.554726 | orchestrator | Sunday 05 April 2026 00:59:42 +0000 (0:00:00.675) 0:00:01.165 **********
2026-04-05 01:02:48.554745 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:02:48.554765 | orchestrator |
2026-04-05 01:02:48.554782 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-05 01:02:48.554800 | orchestrator | Sunday 05 April 2026 00:59:43 +0000 (0:00:00.675) 0:00:01.841 **********
2026-04-05 01:02:48.555017 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-04-05 01:02:48.555042 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-04-05 01:02:48.555058 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-04-05 01:02:48.555103 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-04-05 01:02:48.555120 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-04-05 01:02:48.555137 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-05 01:02:48.555153 | orchestrator |
2026-04-05 01:02:48.555166 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:02:48.555180 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-05 01:02:48.555196 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:02:48.555212 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:02:48.555220 | orchestrator |
2026-04-05 01:02:48.555228 | orchestrator |
2026-04-05 01:02:48.555237 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:02:48.555245 | orchestrator | Sunday 05 April 2026 01:00:36 +0000 (0:00:53.329) 0:00:55.171 **********
2026-04-05 01:02:48.555253 | orchestrator | ===============================================================================
2026-04-05 01:02:48.555261 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 53.33s
2026-04-05 01:02:48.555269 | orchestrator | placement : include_tasks ----------------------------------------------- 0.68s
2026-04-05 01:02:48.555278 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-04-05 01:02:48.555286 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
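The deployment monitor earlier in this log checks a set of task IDs once per second, printing `Task … is in state STARTED` until each task reaches `SUCCESS`, with `Wait 1 second(s) until the next check` between rounds. A minimal sketch of that polling loop, assuming a hypothetical `get_state(task_id)` accessor (not part of the actual OSISM tooling shown here):

```python
import time


def wait_for_tasks(task_ids, get_state, wait_seconds=1, sleep=time.sleep):
    """Poll each task until it leaves the STARTED state.

    get_state(task_id) -> "STARTED" or "SUCCESS" (hypothetical interface).
    Returns a dict mapping each task id to its final state.
    """
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # Terminal state reached; stop polling this task.
                final[task_id] = state
        pending -= set(final)
        if pending:
            print(f"Wait {wait_seconds} second(s) until the next check")
            sleep(wait_seconds)
    return final
```

The `sleep` parameter is injected so the loop can be exercised in tests without real delays; the production loop in the log appears to use a fixed 1-second interval.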
2026-04-05 01:02:48.555294 | orchestrator |
2026-04-05 01:02:48.555302 | orchestrator |
2026-04-05 01:02:48.555310 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:02:48.555318 | orchestrator |
2026-04-05 01:02:48.555326 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:02:48.555334 | orchestrator | Sunday 05 April 2026 00:59:53 +0000 (0:00:00.342) 0:00:00.342 **********
2026-04-05 01:02:48.555346 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:02:48.555359 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:02:48.555377 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:02:48.555393 | orchestrator |
2026-04-05 01:02:48.555406 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:02:48.555419 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.321) 0:00:00.663 **********
2026-04-05 01:02:48.555432 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-05 01:02:48.555459 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-05 01:02:48.555473 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-05 01:02:48.555485 | orchestrator |
2026-04-05 01:02:48.555498 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-05 01:02:48.555511 | orchestrator |
2026-04-05 01:02:48.555541 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-05 01:02:48.555555 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.327) 0:00:00.990 **********
2026-04-05 01:02:48.555582 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:02:48.555596 | orchestrator |
2026-04-05 01:02:48.555609 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-04-05 01:02:48.555622 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:00.710) 0:00:01.701 **********
2026-04-05 01:02:48.555636 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left).
2026-04-05 01:02:48.555645 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left).
2026-04-05 01:02:48.555653 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left).
2026-04-05 01:02:48.555661 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left).
2026-04-05 01:02:48.555669 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left).
2026-04-05 01:02:48.555678 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-05 01:02:48.555691 | orchestrator |
2026-04-05 01:02:48.555699 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:02:48.555707 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-05 01:02:48.555715 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:02:48.555723 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:02:48.555731 | orchestrator |
2026-04-05 01:02:48.555738 | orchestrator |
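Both the placement and magnum plays fail the same way: `service-ks-register` retries five times and then gives up with `kolla_toolbox container is missing or not running!`. The retry behavior is Ansible's `retries`/`delay`/`until` pattern; a rough standalone sketch of it, with a hypothetical `ToolboxUnavailable` exception standing in for the module error (this is an illustration, not the actual Ansible implementation):

```python
import time


class ToolboxUnavailable(Exception):
    """Stand-in for 'kolla_toolbox container is missing or not running!'."""


def run_with_retries(action, retries=5, delay=2, sleep=time.sleep):
    """Re-run `action` up to `retries` times, like Ansible retries/delay.

    Prints a countdown similar to Ansible's 'FAILED - RETRYING' output and
    re-raises the last error once all attempts are exhausted.
    """
    for attempt in range(1, retries + 1):
        try:
            return action()
        except ToolboxUnavailable:
            if attempt == retries:
                # Out of attempts: propagate the final failure.
                raise
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            sleep(delay)
```

Since the underlying cause here is a missing container rather than a transient condition, every attempt fails identically, which matches the five consecutive `FAILED - RETRYING` lines followed by the final `failed:` result.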
2026-04-05 01:02:48.555750 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:02:48.555763 | orchestrator | Sunday 05 April 2026 01:00:48 +0000 (0:00:53.580) 0:00:55.281 **********
2026-04-05 01:02:48.555777 | orchestrator | ===============================================================================
2026-04-05 01:02:48.555790 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 53.58s
2026-04-05 01:02:48.555803 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.71s
2026-04-05 01:02:48.555816 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s
2026-04-05 01:02:48.555830 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-04-05 01:02:48.555844 | orchestrator |
2026-04-05 01:02:48.555909 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14
2026-04-05 01:02:48.555932 | orchestrator |
2026-04-05 01:02:48.555940 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-05 01:02:48.555948 | orchestrator |
2026-04-05 01:02:48.555956 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 01:02:48.555964 | orchestrator | Sunday 05 April 2026 01:00:38 +0000 (0:00:00.607) 0:00:00.607 **********
2026-04-05 01:02:48.555972 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:02:48.555980 | orchestrator |
2026-04-05 01:02:48.555988 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 01:02:48.555996 | orchestrator | Sunday 05 April 2026 01:00:39 +0000 (0:00:00.654) 0:00:01.262 **********
2026-04-05 01:02:48.556004 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:48.556020 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:48.556028 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:48.556036 | orchestrator |
2026-04-05 01:02:48.556044 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 01:02:48.556052 | orchestrator | Sunday 05 April 2026 01:00:40 +0000 (0:00:01.063) 0:00:02.326 **********
2026-04-05 01:02:48.556060 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:48.556068 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:48.556076 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:48.556084 | orchestrator |
2026-04-05 01:02:48.556092 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 01:02:48.556100 | orchestrator | Sunday 05 April 2026 01:00:40 +0000 (0:00:00.334) 0:00:02.660 **********
2026-04-05 01:02:48.556108 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:48.556116 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:48.556123 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:48.556131 | orchestrator |
2026-04-05 01:02:48.556139 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 01:02:48.556147 | orchestrator | Sunday 05 April 2026 01:00:41 +0000 (0:00:00.842) 0:00:03.503 **********
2026-04-05 01:02:48.556155 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:48.556170 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:48.556178 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:48.556186 | orchestrator |
2026-04-05 01:02:48.556194 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 01:02:48.556211 | orchestrator | Sunday 05 April 2026 01:00:41 +0000 (0:00:00.366) 0:00:03.870 **********
2026-04-05 01:02:48.556219 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:48.556227 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:48.556235 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:48.556242 | orchestrator |
2026-04-05 01:02:48.556250 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 01:02:48.556258 | orchestrator | Sunday 05 April 2026 01:00:42 +0000 (0:00:00.354) 0:00:04.224 **********
2026-04-05 01:02:48.556266 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:48.556274 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:48.556282 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:48.556290 | orchestrator |
2026-04-05 01:02:48.556297 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 01:02:48.556305 | orchestrator | Sunday 05 April 2026 01:00:42 +0000 (0:00:00.361) 0:00:04.586 **********
2026-04-05 01:02:48.556313 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:02:48.556321 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:02:48.556329 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:02:48.556337 | orchestrator |
2026-04-05 01:02:48.556345 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 01:02:48.556353 | orchestrator | Sunday 05 April 2026 01:00:42 +0000 (0:00:00.570) 0:00:05.156 **********
2026-04-05 01:02:48.556361 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:02:48.556369 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:02:48.556376 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:02:48.556384 | orchestrator |
2026-04-05 01:02:48.556392 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 01:02:48.556400 | orchestrator | Sunday 05 April 2026 01:00:43 +0000 (0:00:00.307) 0:00:05.463 **********
2026-04-05 01:02:48.556407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 01:02:48.556416 | orchestrator |
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:02:48.556423 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:02:48.556431 | orchestrator | 2026-04-05 01:02:48.556439 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 01:02:48.556447 | orchestrator | Sunday 05 April 2026 01:00:43 +0000 (0:00:00.649) 0:00:06.113 ********** 2026-04-05 01:02:48.556455 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:48.556468 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:48.556476 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:48.556484 | orchestrator | 2026-04-05 01:02:48.556491 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 01:02:48.556499 | orchestrator | Sunday 05 April 2026 01:00:44 +0000 (0:00:00.434) 0:00:06.548 ********** 2026-04-05 01:02:48.556507 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:02:48.556515 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:02:48.556522 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:02:48.556530 | orchestrator | 2026-04-05 01:02:48.556538 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 01:02:48.556546 | orchestrator | Sunday 05 April 2026 01:00:47 +0000 (0:00:03.083) 0:00:09.632 ********** 2026-04-05 01:02:48.556554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 01:02:48.556562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 01:02:48.556570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 01:02:48.556577 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
01:02:48.556585 | orchestrator | 2026-04-05 01:02:48.556593 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 01:02:48.556601 | orchestrator | Sunday 05 April 2026 01:00:47 +0000 (0:00:00.432) 0:00:10.064 ********** 2026-04-05 01:02:48.556611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.556622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.556630 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.556638 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.556646 | orchestrator | 2026-04-05 01:02:48.556654 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 01:02:48.556661 | orchestrator | Sunday 05 April 2026 01:00:48 +0000 (0:00:00.956) 0:00:11.020 ********** 2026-04-05 01:02:48.556675 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.556691 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.556700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.556714 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.556722 | orchestrator | 2026-04-05 01:02:48.556730 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 01:02:48.556737 | orchestrator | Sunday 05 April 2026 01:00:49 +0000 (0:00:00.171) 0:00:11.192 ********** 2026-04-05 01:02:48.556747 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'da83d99c61d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 01:00:45.359918', 'end': '2026-04-05 01:00:45.404838', 'delta': '0:00:00.044920', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da83d99c61d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 
01:02:48.556759 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '03b90fc15bdb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 01:00:46.434130', 'end': '2026-04-05 01:00:46.482642', 'delta': '0:00:00.048512', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['03b90fc15bdb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 01:02:48.556768 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a10994bb6c61', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 01:00:47.285655', 'end': '2026-04-05 01:00:47.322164', 'delta': '0:00:00.036509', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a10994bb6c61'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 01:02:48.556776 | orchestrator | 2026-04-05 01:02:48.556784 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 01:02:48.556792 | orchestrator | Sunday 05 April 2026 01:00:49 +0000 (0:00:00.389) 0:00:11.581 ********** 2026-04-05 01:02:48.556800 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:48.556808 | orchestrator | ok: [testbed-node-4] 2026-04-05 
01:02:48.556816 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:48.556824 | orchestrator | 2026-04-05 01:02:48.556831 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 01:02:48.556839 | orchestrator | Sunday 05 April 2026 01:00:49 +0000 (0:00:00.426) 0:00:12.008 ********** 2026-04-05 01:02:48.556896 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-05 01:02:48.556912 | orchestrator | 2026-04-05 01:02:48.556925 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 01:02:48.556939 | orchestrator | Sunday 05 April 2026 01:00:51 +0000 (0:00:01.750) 0:00:13.759 ********** 2026-04-05 01:02:48.556952 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.556964 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.556984 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.556999 | orchestrator | 2026-04-05 01:02:48.557013 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 01:02:48.557026 | orchestrator | Sunday 05 April 2026 01:00:51 +0000 (0:00:00.325) 0:00:14.084 ********** 2026-04-05 01:02:48.557057 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.557073 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557086 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557099 | orchestrator | 2026-04-05 01:02:48.557113 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 01:02:48.557126 | orchestrator | Sunday 05 April 2026 01:00:52 +0000 (0:00:00.452) 0:00:14.536 ********** 2026-04-05 01:02:48.557140 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.557153 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557167 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557180 | orchestrator | 2026-04-05 
01:02:48.557194 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 01:02:48.557209 | orchestrator | Sunday 05 April 2026 01:00:52 +0000 (0:00:00.491) 0:00:15.028 ********** 2026-04-05 01:02:48.557222 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:48.557235 | orchestrator | 2026-04-05 01:02:48.557248 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 01:02:48.557260 | orchestrator | Sunday 05 April 2026 01:00:52 +0000 (0:00:00.134) 0:00:15.163 ********** 2026-04-05 01:02:48.557268 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.557276 | orchestrator | 2026-04-05 01:02:48.557284 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 01:02:48.557291 | orchestrator | Sunday 05 April 2026 01:00:53 +0000 (0:00:00.231) 0:00:15.394 ********** 2026-04-05 01:02:48.557299 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.557307 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557315 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557323 | orchestrator | 2026-04-05 01:02:48.557331 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 01:02:48.557338 | orchestrator | Sunday 05 April 2026 01:00:53 +0000 (0:00:00.338) 0:00:15.733 ********** 2026-04-05 01:02:48.557346 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.557354 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557362 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557370 | orchestrator | 2026-04-05 01:02:48.557378 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 01:02:48.557386 | orchestrator | Sunday 05 April 2026 01:00:53 +0000 (0:00:00.331) 0:00:16.064 ********** 2026-04-05 01:02:48.557394 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 01:02:48.557401 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557409 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557417 | orchestrator | 2026-04-05 01:02:48.557425 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 01:02:48.557432 | orchestrator | Sunday 05 April 2026 01:00:54 +0000 (0:00:00.545) 0:00:16.610 ********** 2026-04-05 01:02:48.557440 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.557448 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557456 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557464 | orchestrator | 2026-04-05 01:02:48.557471 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 01:02:48.557479 | orchestrator | Sunday 05 April 2026 01:00:54 +0000 (0:00:00.341) 0:00:16.952 ********** 2026-04-05 01:02:48.557487 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.557495 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557503 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557511 | orchestrator | 2026-04-05 01:02:48.557518 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 01:02:48.557526 | orchestrator | Sunday 05 April 2026 01:00:55 +0000 (0:00:00.391) 0:00:17.344 ********** 2026-04-05 01:02:48.557534 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.557542 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557550 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557558 | orchestrator | 2026-04-05 01:02:48.557580 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 01:02:48.557588 | orchestrator | Sunday 05 April 2026 01:00:55 +0000 (0:00:00.455) 0:00:17.800 ********** 2026-04-05 01:02:48.557596 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 01:02:48.557604 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.557612 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.557619 | orchestrator | 2026-04-05 01:02:48.557627 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 01:02:48.557635 | orchestrator | Sunday 05 April 2026 01:00:56 +0000 (0:00:00.564) 0:00:18.365 ********** 2026-04-05 01:02:48.557645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--157b1f80--825d--547a--87b1--b4c204357e87-osd--block--157b1f80--825d--547a--87b1--b4c204357e87', 'dm-uuid-LVM-nQrLrg0BqHGe1A9RVbz4Nu5m0j1vrxufGT7BkWGPm6gLoI0ePIQomnNlHNuIq6pw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b6d430e--d9c3--5542--869b--9d02c8b92670-osd--block--9b6d430e--d9c3--5542--869b--9d02c8b92670', 'dm-uuid-LVM-pJGTtVd0YecZ46sZFLKOsdsVdlJcVA2onJ2hK2zOqpuPcYhfTRgtwIbvSdlkRXVQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff-osd--block--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff', 'dm-uuid-LVM-SdC8ztndVEjqDn76uiYoCnN9YKXW866zw4C7S5cpDRFMGMeV03iItzmsABbAOW1Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part1', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part14', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part15', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part16', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.557791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4b90bbc--8b4b--55ca--a382--2d9a937d0621-osd--block--e4b90bbc--8b4b--55ca--a382--2d9a937d0621', 'dm-uuid-LVM-40atcOoPTn2r7zcM8xjzJcp5DSddbcu8P5CKkZSxNZ31yxB89hSU7w4vE8f6IoHH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--157b1f80--825d--547a--87b1--b4c204357e87-osd--block--157b1f80--825d--547a--87b1--b4c204357e87'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iMmvsK-0VwP-4LtN-JgAY-8KwJ-Qkzj-Sd6GTm', 'scsi-0QEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087', 'scsi-SQEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.557809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9b6d430e--d9c3--5542--869b--9d02c8b92670-osd--block--9b6d430e--d9c3--5542--869b--9d02c8b92670'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rUfKFI-2hLM-RLIH-NWWZ-lLs3-Hr3n-6MCkAD', 'scsi-0QEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966', 'scsi-SQEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.557837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30', 'scsi-SQEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.557877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.557913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-05 01:02:48.557934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.557949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part1', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part14', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part15', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part16', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.558722 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff-osd--block--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GyMLKF-vHry-RHc8-7cfV-tFf6-qbXV-SENsCI', 'scsi-0QEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2', 'scsi-SQEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.558731 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.558752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e4b90bbc--8b4b--55ca--a382--2d9a937d0621-osd--block--e4b90bbc--8b4b--55ca--a382--2d9a937d0621'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6y2lAn-RokD-h7cF-v8Tu-gO13-n3Fe-OrwFWK', 'scsi-0QEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767', 'scsi-SQEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.558762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92', 'scsi-SQEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.558771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.558785 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.558794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6b2ea8b--e42f--5ec6--b7af--dc106d037603-osd--block--f6b2ea8b--e42f--5ec6--b7af--dc106d037603', 'dm-uuid-LVM-64tzOHgG53FLXCSb5I0VPAT3nsukjOL16ewcjji0zoeq4oyylltpfn74y4tIzZcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ecfcc343--98df--5597--aad3--97c87b883418-osd--block--ecfcc343--98df--5597--aad3--97c87b883418', 'dm-uuid-LVM-5ExjhmqCLQRr1pQ6CfVmcM7UkPJni7dcskxYgjKgsBz0rDCEmGj1VvwWLGVzvCuY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:02:48.558959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part1', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part14', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part15', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part16', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.558975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f6b2ea8b--e42f--5ec6--b7af--dc106d037603-osd--block--f6b2ea8b--e42f--5ec6--b7af--dc106d037603'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a29ITw-PgrV-2Yfg-fVgD-Du8V-njBE-NtDgfI', 'scsi-0QEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9', 'scsi-SQEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.558985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ecfcc343--98df--5597--aad3--97c87b883418-osd--block--ecfcc343--98df--5597--aad3--97c87b883418'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5C5KmZ-D4SZ-KmL6-Wc4J-nTNr-URgw-SQyKn9', 'scsi-0QEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304', 'scsi-SQEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.558999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d', 'scsi-SQEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.559008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:02:48.559016 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.559024 | orchestrator | 2026-04-05 01:02:48.559033 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-05 01:02:48.559041 | orchestrator | Sunday 05 April 2026 01:00:56 +0000 (0:00:00.634) 0:00:18.999 ********** 2026-04-05 01:02:48.559050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--157b1f80--825d--547a--87b1--b4c204357e87-osd--block--157b1f80--825d--547a--87b1--b4c204357e87', 'dm-uuid-LVM-nQrLrg0BqHGe1A9RVbz4Nu5m0j1vrxufGT7BkWGPm6gLoI0ePIQomnNlHNuIq6pw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559074 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b6d430e--d9c3--5542--869b--9d02c8b92670-osd--block--9b6d430e--d9c3--5542--869b--9d02c8b92670', 'dm-uuid-LVM-pJGTtVd0YecZ46sZFLKOsdsVdlJcVA2onJ2hK2zOqpuPcYhfTRgtwIbvSdlkRXVQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559234 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part1', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part14', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part15', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part16', 'scsi-SQEMU_QEMU_HARDDISK_3843aef3-dc24-4830-b144-e5ace4620886-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 01:02:48.559267 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--157b1f80--825d--547a--87b1--b4c204357e87-osd--block--157b1f80--825d--547a--87b1--b4c204357e87'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iMmvsK-0VwP-4LtN-JgAY-8KwJ-Qkzj-Sd6GTm', 'scsi-0QEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087', 'scsi-SQEMU_QEMU_HARDDISK_bbb51bc2-5c72-44e5-9d02-9dee12b3d087'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559278 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff-osd--block--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff', 'dm-uuid-LVM-SdC8ztndVEjqDn76uiYoCnN9YKXW866zw4C7S5cpDRFMGMeV03iItzmsABbAOW1Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9b6d430e--d9c3--5542--869b--9d02c8b92670-osd--block--9b6d430e--d9c3--5542--869b--9d02c8b92670'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rUfKFI-2hLM-RLIH-NWWZ-lLs3-Hr3n-6MCkAD', 'scsi-0QEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966', 'scsi-SQEMU_QEMU_HARDDISK_6aa9f314-df3a-4dde-8ae5-362160a07966'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4b90bbc--8b4b--55ca--a382--2d9a937d0621-osd--block--e4b90bbc--8b4b--55ca--a382--2d9a937d0621', 'dm-uuid-LVM-40atcOoPTn2r7zcM8xjzJcp5DSddbcu8P5CKkZSxNZ31yxB89hSU7w4vE8f6IoHH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30', 'scsi-SQEMU_QEMU_HARDDISK_1177e3c7-06af-4e5c-a5c6-38f8cbd69f30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559334 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559345 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559361 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559381 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559391 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559401 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.559415 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559431 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559446 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part1', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part14', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part15', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part16', 'scsi-SQEMU_QEMU_HARDDISK_989468f1-c97d-420d-8f1d-aaccb4460869-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559477 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6b2ea8b--e42f--5ec6--b7af--dc106d037603-osd--block--f6b2ea8b--e42f--5ec6--b7af--dc106d037603', 'dm-uuid-LVM-64tzOHgG53FLXCSb5I0VPAT3nsukjOL16ewcjji0zoeq4oyylltpfn74y4tIzZcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559502 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff-osd--block--2cc0fb6a--bf3f--5a25--9286--a8c7d7ff4bff'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GyMLKF-vHry-RHc8-7cfV-tFf6-qbXV-SENsCI', 'scsi-0QEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2', 'scsi-SQEMU_QEMU_HARDDISK_33101796-df65-4afe-85e5-47b8cf02a1f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559511 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecfcc343--98df--5597--aad3--97c87b883418-osd--block--ecfcc343--98df--5597--aad3--97c87b883418', 'dm-uuid-LVM-5ExjhmqCLQRr1pQ6CfVmcM7UkPJni7dcskxYgjKgsBz0rDCEmGj1VvwWLGVzvCuY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559519 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e4b90bbc--8b4b--55ca--a382--2d9a937d0621-osd--block--e4b90bbc--8b4b--55ca--a382--2d9a937d0621'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6y2lAn-RokD-h7cF-v8Tu-gO13-n3Fe-OrwFWK', 'scsi-0QEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767', 'scsi-SQEMU_QEMU_HARDDISK_24ae3204-b804-4dec-a460-b72326a00767'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559528 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559547 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92', 'scsi-SQEMU_QEMU_HARDDISK_b0d5e8f5-5539-4914-ae8f-3a21993d2a92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559569 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559582 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559609 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.559622 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559635 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559653 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559689 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559719 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part1', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part14', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part15', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part16', 'scsi-SQEMU_QEMU_HARDDISK_51f889b5-0c19-4400-81f2-c1867e388db0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f6b2ea8b--e42f--5ec6--b7af--dc106d037603-osd--block--f6b2ea8b--e42f--5ec6--b7af--dc106d037603'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a29ITw-PgrV-2Yfg-fVgD-Du8V-njBE-NtDgfI', 'scsi-0QEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9', 'scsi-SQEMU_QEMU_HARDDISK_a133f214-06af-4f92-a3a2-2d6b80cceed9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559770 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ecfcc343--98df--5597--aad3--97c87b883418-osd--block--ecfcc343--98df--5597--aad3--97c87b883418'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5C5KmZ-D4SZ-KmL6-Wc4J-nTNr-URgw-SQyKn9', 'scsi-0QEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304', 'scsi-SQEMU_QEMU_HARDDISK_f1da7dba-c9cf-4b54-92a2-357ae45f4304'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559786 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d', 'scsi-SQEMU_QEMU_HARDDISK_1bba6e7f-491d-44c1-b292-643b4f29b95d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559799 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:02:48.559813 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.559827 | orchestrator | 2026-04-05 01:02:48.559843 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 01:02:48.559878 | orchestrator | Sunday 05 April 2026 01:00:57 +0000 (0:00:00.674) 0:00:19.674 ********** 2026-04-05 01:02:48.559891 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:48.559904 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:48.559918 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:48.559932 | orchestrator | 2026-04-05 01:02:48.559946 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 01:02:48.559968 | orchestrator | Sunday 05 April 2026 01:00:58 +0000 (0:00:00.698) 0:00:20.373 ********** 2026-04-05 01:02:48.559977 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:48.559985 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:48.559993 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:48.560000 | orchestrator | 2026-04-05 01:02:48.560008 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:02:48.560017 | orchestrator | Sunday 05 April 2026 01:00:58 +0000 (0:00:00.527) 0:00:20.901 ********** 2026-04-05 01:02:48.560024 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:48.560032 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:48.560040 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:48.560048 | orchestrator | 2026-04-05 01:02:48.560056 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:02:48.560064 | orchestrator | Sunday 05 April 2026 01:00:59 +0000 (0:00:00.673) 0:00:21.574 
********** 2026-04-05 01:02:48.560072 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560080 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.560093 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.560101 | orchestrator | 2026-04-05 01:02:48.560109 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:02:48.560118 | orchestrator | Sunday 05 April 2026 01:00:59 +0000 (0:00:00.330) 0:00:21.904 ********** 2026-04-05 01:02:48.560132 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560140 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.560148 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.560155 | orchestrator | 2026-04-05 01:02:48.560163 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:02:48.560171 | orchestrator | Sunday 05 April 2026 01:01:00 +0000 (0:00:00.442) 0:00:22.347 ********** 2026-04-05 01:02:48.560179 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560187 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.560195 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.560203 | orchestrator | 2026-04-05 01:02:48.560210 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 01:02:48.560218 | orchestrator | Sunday 05 April 2026 01:01:00 +0000 (0:00:00.533) 0:00:22.880 ********** 2026-04-05 01:02:48.560226 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-05 01:02:48.560234 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-05 01:02:48.560242 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-05 01:02:48.560250 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-05 01:02:48.560257 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-05 01:02:48.560265 | orchestrator 
| ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-05 01:02:48.560273 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-05 01:02:48.560281 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-05 01:02:48.560288 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-05 01:02:48.560296 | orchestrator | 2026-04-05 01:02:48.560304 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 01:02:48.560312 | orchestrator | Sunday 05 April 2026 01:01:01 +0000 (0:00:00.861) 0:00:23.742 ********** 2026-04-05 01:02:48.560320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 01:02:48.560328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 01:02:48.560336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 01:02:48.560344 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560352 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 01:02:48.560411 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 01:02:48.560420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 01:02:48.560428 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.560436 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 01:02:48.560450 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 01:02:48.560458 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 01:02:48.560466 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.560474 | orchestrator | 2026-04-05 01:02:48.560482 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 01:02:48.560490 | orchestrator | Sunday 05 April 2026 01:01:01 +0000 (0:00:00.401) 0:00:24.144 ********** 2026-04-05 
01:02:48.560499 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:02:48.560507 | orchestrator | 2026-04-05 01:02:48.560515 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 01:02:48.560525 | orchestrator | Sunday 05 April 2026 01:01:02 +0000 (0:00:00.800) 0:00:24.945 ********** 2026-04-05 01:02:48.560533 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560540 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.560548 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.560556 | orchestrator | 2026-04-05 01:02:48.560564 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 01:02:48.560572 | orchestrator | Sunday 05 April 2026 01:01:03 +0000 (0:00:00.362) 0:00:25.307 ********** 2026-04-05 01:02:48.560580 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560588 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.560596 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.560604 | orchestrator | 2026-04-05 01:02:48.560612 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 01:02:48.560619 | orchestrator | Sunday 05 April 2026 01:01:03 +0000 (0:00:00.327) 0:00:25.635 ********** 2026-04-05 01:02:48.560627 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560635 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.560643 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:02:48.560651 | orchestrator | 2026-04-05 01:02:48.560659 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 01:02:48.560667 | orchestrator | Sunday 05 April 2026 01:01:03 +0000 (0:00:00.359) 0:00:25.994 ********** 2026-04-05 
01:02:48.560674 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:48.560682 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:48.560690 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:48.560698 | orchestrator | 2026-04-05 01:02:48.560706 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 01:02:48.560714 | orchestrator | Sunday 05 April 2026 01:01:04 +0000 (0:00:00.627) 0:00:26.621 ********** 2026-04-05 01:02:48.560722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:02:48.560730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:02:48.560738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:02:48.560746 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560753 | orchestrator | 2026-04-05 01:02:48.560761 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 01:02:48.560773 | orchestrator | Sunday 05 April 2026 01:01:04 +0000 (0:00:00.397) 0:00:27.019 ********** 2026-04-05 01:02:48.560782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:02:48.560790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:02:48.560798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:02:48.560811 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560819 | orchestrator | 2026-04-05 01:02:48.560827 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 01:02:48.560835 | orchestrator | Sunday 05 April 2026 01:01:05 +0000 (0:00:00.377) 0:00:27.396 ********** 2026-04-05 01:02:48.560843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:02:48.560873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:02:48.560895 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:02:48.560909 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.560922 | orchestrator | 2026-04-05 01:02:48.560933 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 01:02:48.560941 | orchestrator | Sunday 05 April 2026 01:01:05 +0000 (0:00:00.427) 0:00:27.824 ********** 2026-04-05 01:02:48.560949 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:48.560957 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:48.560965 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:48.560973 | orchestrator | 2026-04-05 01:02:48.560981 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 01:02:48.560989 | orchestrator | Sunday 05 April 2026 01:01:06 +0000 (0:00:00.361) 0:00:28.186 ********** 2026-04-05 01:02:48.560997 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 01:02:48.561005 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 01:02:48.561013 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 01:02:48.561021 | orchestrator | 2026-04-05 01:02:48.561029 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 01:02:48.561038 | orchestrator | Sunday 05 April 2026 01:01:06 +0000 (0:00:00.525) 0:00:28.711 ********** 2026-04-05 01:02:48.561046 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:02:48.561054 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:02:48.561062 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:02:48.561070 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:02:48.561078 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-05 01:02:48.561086 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:02:48.561094 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:02:48.561102 | orchestrator | 2026-04-05 01:02:48.561110 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 01:02:48.561118 | orchestrator | Sunday 05 April 2026 01:01:07 +0000 (0:00:01.223) 0:00:29.935 ********** 2026-04-05 01:02:48.561126 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:02:48.561134 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:02:48.561142 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:02:48.561150 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:02:48.561158 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 01:02:48.561166 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:02:48.561174 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:02:48.561182 | orchestrator | 2026-04-05 01:02:48.561190 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-05 01:02:48.561198 | orchestrator | Sunday 05 April 2026 01:01:10 +0000 (0:00:02.243) 0:00:32.179 ********** 2026-04-05 01:02:48.561206 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:02:48.561214 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:02:48.561222 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-05 01:02:48.561230 | orchestrator | 2026-04-05 01:02:48.561238 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-05 01:02:48.561246 | orchestrator | Sunday 05 April 2026 01:01:10 +0000 (0:00:00.420) 0:00:32.599 ********** 2026-04-05 01:02:48.561256 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:48.561271 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:48.561280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:48.561298 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:48.561307 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:02:48.561316 | orchestrator | 2026-04-05 01:02:48.561324 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-05 01:02:48.561332 | orchestrator | Sunday 05 April 2026 01:01:55 +0000 (0:00:44.841) 0:01:17.441 ********** 2026-04-05 01:02:48.561340 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561348 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561355 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561363 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561371 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561379 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561387 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-05 01:02:48.561395 | orchestrator | 2026-04-05 01:02:48.561403 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-05 01:02:48.561410 | orchestrator | Sunday 05 April 2026 01:02:18 +0000 (0:00:23.566) 0:01:41.008 ********** 2026-04-05 01:02:48.561418 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561426 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561434 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561442 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561450 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561457 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561465 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:02:48.561473 | orchestrator | 2026-04-05 01:02:48.561481 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-05 01:02:48.561489 | orchestrator | Sunday 05 April 2026 01:02:30 +0000 (0:00:11.700) 0:01:52.709 ********** 2026-04-05 01:02:48.561497 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561505 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:48.561513 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:48.561527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561535 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:48.561543 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:48.561551 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561559 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:48.561566 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:48.561574 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561582 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:48.561590 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:48.561598 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561606 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-05 01:02:48.561614 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:48.561621 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:02:48.561629 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:02:48.561637 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:02:48.561645 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-05 01:02:48.561653 | orchestrator | 2026-04-05 01:02:48.561661 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:02:48.561669 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-05 01:02:48.561679 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-05 01:02:48.561691 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-05 01:02:48.561700 | orchestrator | 2026-04-05 01:02:48.561708 | orchestrator | 2026-04-05 01:02:48.561716 | orchestrator | 2026-04-05 01:02:48.561724 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:02:48.561732 | orchestrator | Sunday 05 April 2026 01:02:47 +0000 (0:00:17.257) 0:02:09.966 ********** 2026-04-05 01:02:48.561740 | orchestrator | =============================================================================== 2026-04-05 01:02:48.561748 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.84s 2026-04-05 01:02:48.561756 | orchestrator | generate keys ---------------------------------------------------------- 23.57s 2026-04-05 01:02:48.561763 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.26s 
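The PLAY RECAP above reports per-host counters (`ok`, `changed`, `unreachable`, `failed`, ...). A minimal sketch of how CI tooling might parse such recap lines to flag runs with failures — this helper is hypothetical, not part of OSISM or Zuul:

```python
import re

# Hypothetical helper: parse an Ansible "PLAY RECAP" host line into a dict
# of integer counters, so a log post-processor can flag failed/unreachable hosts.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+"
    r"ignored=(?P<ignored>\d+)"
)

def parse_recap_line(line):
    """Return {'host': ..., 'ok': int, ...} or None if the line is not a recap."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    d = m.groupdict()
    return {"host": d.pop("host"), **{k: int(v) for k, v in d.items()}}

# Example taken from the recap above.
stats = parse_recap_line(
    "testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0"
)
```

A post-processor would typically fail the build if any host has `failed > 0` or `unreachable > 0`.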
2026-04-05 01:02:48.561771 | orchestrator | get keys from monitors ------------------------------------------------- 11.70s 2026-04-05 01:02:48.561779 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.08s 2026-04-05 01:02:48.561787 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.24s 2026-04-05 01:02:48.561795 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.75s 2026-04-05 01:02:48.561803 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.22s 2026-04-05 01:02:48.561810 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.06s 2026-04-05 01:02:48.561818 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.96s 2026-04-05 01:02:48.561949 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s 2026-04-05 01:02:48.561981 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.84s 2026-04-05 01:02:48.561990 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.80s 2026-04-05 01:02:48.561998 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s 2026-04-05 01:02:48.562005 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.67s 2026-04-05 01:02:48.562013 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2026-04-05 01:02:48.562069 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2026-04-05 01:02:48.562077 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2026-04-05 01:02:48.562085 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.63s 2026-04-05 
01:02:48.562093 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s 2026-04-05 01:02:51.616498 | orchestrator | 2026-04-05 01:02:51 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:02:51.619249 | orchestrator | 2026-04-05 01:02:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:02:51.619307 | orchestrator | 2026-04-05 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:54.668084 | orchestrator | 2026-04-05 01:02:54 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:02:54.669310 | orchestrator | 2026-04-05 01:02:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:02:54.669367 | orchestrator | 2026-04-05 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:57.717104 | orchestrator | 2026-04-05 01:02:57 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:02:57.720453 | orchestrator | 2026-04-05 01:02:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:02:57.720532 | orchestrator | 2026-04-05 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:00.764093 | orchestrator | 2026-04-05 01:03:00 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:00.765549 | orchestrator | 2026-04-05 01:03:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:00.765635 | orchestrator | 2026-04-05 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:03.821499 | orchestrator | 2026-04-05 01:03:03 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:03.823831 | orchestrator | 2026-04-05 01:03:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:03.823938 | orchestrator | 2026-04-05 01:03:03 | INFO  | Wait 
1 second(s) until the next check 2026-04-05 01:03:06.888554 | orchestrator | 2026-04-05 01:03:06 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:06.889433 | orchestrator | 2026-04-05 01:03:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:06.889516 | orchestrator | 2026-04-05 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:09.941549 | orchestrator | 2026-04-05 01:03:09 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:09.942647 | orchestrator | 2026-04-05 01:03:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:09.942739 | orchestrator | 2026-04-05 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:12.986011 | orchestrator | 2026-04-05 01:03:12 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:12.986723 | orchestrator | 2026-04-05 01:03:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:12.986751 | orchestrator | 2026-04-05 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:16.040054 | orchestrator | 2026-04-05 01:03:16 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:16.041994 | orchestrator | 2026-04-05 01:03:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:16.042097 | orchestrator | 2026-04-05 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:19.091420 | orchestrator | 2026-04-05 01:03:19 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:19.095999 | orchestrator | 2026-04-05 01:03:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:19.098206 | orchestrator | 2026-04-05 01:03:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:22.153883 | orchestrator | 
2026-04-05 01:03:22 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:22.155042 | orchestrator | 2026-04-05 01:03:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:22.155099 | orchestrator | 2026-04-05 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:25.198367 | orchestrator | 2026-04-05 01:03:25 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state STARTED 2026-04-05 01:03:25.200924 | orchestrator | 2026-04-05 01:03:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:25.201075 | orchestrator | 2026-04-05 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:28.252999 | orchestrator | 2026-04-05 01:03:28 | INFO  | Task d4e861f9-5936-4e35-84cb-e6f63f88d307 is in state SUCCESS 2026-04-05 01:03:28.255855 | orchestrator | 2026-04-05 01:03:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:28.258593 | orchestrator | 2026-04-05 01:03:28 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:28.258702 | orchestrator | 2026-04-05 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:31.315151 | orchestrator | 2026-04-05 01:03:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:31.317123 | orchestrator | 2026-04-05 01:03:31 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:31.317180 | orchestrator | 2026-04-05 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:34.356500 | orchestrator | 2026-04-05 01:03:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:34.357855 | orchestrator | 2026-04-05 01:03:34 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:34.357927 | orchestrator | 2026-04-05 01:03:34 | INFO  | Wait 1 second(s) until 
the next check 2026-04-05 01:03:37.413394 | orchestrator | 2026-04-05 01:03:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:37.415450 | orchestrator | 2026-04-05 01:03:37 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:37.415523 | orchestrator | 2026-04-05 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:40.457756 | orchestrator | 2026-04-05 01:03:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:40.458715 | orchestrator | 2026-04-05 01:03:40 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:40.459022 | orchestrator | 2026-04-05 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:43.516294 | orchestrator | 2026-04-05 01:03:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:43.518127 | orchestrator | 2026-04-05 01:03:43 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:43.518169 | orchestrator | 2026-04-05 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:46.570272 | orchestrator | 2026-04-05 01:03:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:46.570752 | orchestrator | 2026-04-05 01:03:46 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:46.570788 | orchestrator | 2026-04-05 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:49.625247 | orchestrator | 2026-04-05 01:03:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:49.627089 | orchestrator | 2026-04-05 01:03:49 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:49.627129 | orchestrator | 2026-04-05 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:52.677594 | orchestrator | 2026-04-05 01:03:52 
| INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:52.680577 | orchestrator | 2026-04-05 01:03:52 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:52.680664 | orchestrator | 2026-04-05 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:55.727177 | orchestrator | 2026-04-05 01:03:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:55.729513 | orchestrator | 2026-04-05 01:03:55 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:55.730449 | orchestrator | 2026-04-05 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:03:58.779145 | orchestrator | 2026-04-05 01:03:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:03:58.780958 | orchestrator | 2026-04-05 01:03:58 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:03:58.781338 | orchestrator | 2026-04-05 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:01.831999 | orchestrator | 2026-04-05 01:04:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:01.834467 | orchestrator | 2026-04-05 01:04:01 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:01.834526 | orchestrator | 2026-04-05 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:04.882070 | orchestrator | 2026-04-05 01:04:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:04.885278 | orchestrator | 2026-04-05 01:04:04 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:04.885351 | orchestrator | 2026-04-05 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:07.933013 | orchestrator | 2026-04-05 01:04:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 
2026-04-05 01:04:07.934772 | orchestrator | 2026-04-05 01:04:07 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:07.934823 | orchestrator | 2026-04-05 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:10.979190 | orchestrator | 2026-04-05 01:04:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:10.980532 | orchestrator | 2026-04-05 01:04:10 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:10.980847 | orchestrator | 2026-04-05 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:14.037060 | orchestrator | 2026-04-05 01:04:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:14.037586 | orchestrator | 2026-04-05 01:04:14 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:14.037605 | orchestrator | 2026-04-05 01:04:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:17.088214 | orchestrator | 2026-04-05 01:04:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:17.089294 | orchestrator | 2026-04-05 01:04:17 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:17.089336 | orchestrator | 2026-04-05 01:04:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:20.144834 | orchestrator | 2026-04-05 01:04:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:20.146580 | orchestrator | 2026-04-05 01:04:20 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:20.146680 | orchestrator | 2026-04-05 01:04:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:23.209326 | orchestrator | 2026-04-05 01:04:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:23.210524 | orchestrator | 2026-04-05 01:04:23 | INFO  | Task 
19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:23.210895 | orchestrator | 2026-04-05 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:26.254111 | orchestrator | 2026-04-05 01:04:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:26.254799 | orchestrator | 2026-04-05 01:04:26 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state STARTED 2026-04-05 01:04:26.254838 | orchestrator | 2026-04-05 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:29.308852 | orchestrator | 2026-04-05 01:04:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:29.309881 | orchestrator | 2026-04-05 01:04:29.309909 | orchestrator | 2026-04-05 01:04:29.309952 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-05 01:04:29.309968 | orchestrator | 2026-04-05 01:04:29.309984 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-05 01:04:29.310004 | orchestrator | Sunday 05 April 2026 01:02:51 +0000 (0:00:00.246) 0:00:00.246 ********** 2026-04-05 01:04:29.310070 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-05 01:04:29.310087 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310103 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310118 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:04:29.310134 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310149 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.nova.keyring) 2026-04-05 01:04:29.310164 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-05 01:04:29.310181 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:04:29.310196 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-05 01:04:29.310238 | orchestrator | 2026-04-05 01:04:29.310256 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-05 01:04:29.310272 | orchestrator | Sunday 05 April 2026 01:02:56 +0000 (0:00:04.795) 0:00:05.042 ********** 2026-04-05 01:04:29.310287 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-05 01:04:29.310302 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310317 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310332 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:04:29.310347 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310362 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-05 01:04:29.310378 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-05 01:04:29.310393 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:04:29.310409 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-05 01:04:29.310424 | orchestrator | 
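The fetch loop above lists `ceph.client.cinder.keyring` more than once because several services consume the same key; the subsequent write step reports the repeats as `ok` rather than `changed`. A sketch of that idempotent write-to-share-directory behavior, with illustrative names and content (not the actual OSISM task code):

```python
import tempfile
from pathlib import Path

def write_keyrings(keyrings, share_dir):
    """Write (name, content) pairs into share_dir; skip entries whose target
    already holds identical content, so duplicate items report no change."""
    share = Path(share_dir)
    share.mkdir(parents=True, exist_ok=True)
    written = []
    for name, content in keyrings:
        target = share / name
        if target.exists() and target.read_text() == content:
            continue  # unchanged duplicate -> 'ok' instead of 'changed'
        target.write_text(content)
        written.append(name)
    return written

# Demo: the duplicated cinder item (as in the log) is written only once.
with tempfile.TemporaryDirectory() as tmp:
    written = write_keyrings(
        [("ceph.client.admin.keyring", "key-a"),
         ("ceph.client.cinder.keyring", "key-b"),
         ("ceph.client.cinder.keyring", "key-b")],  # duplicate loop item
        tmp,
    )
```

This content-comparison check is what makes the second and third cinder items show up as `ok` in the "Write ceph keys to the share directory" task output.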
2026-04-05 01:04:29.310439 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-05 01:04:29.310454 | orchestrator | Sunday 05 April 2026 01:03:00 +0000 (0:00:04.070) 0:00:09.112 ********** 2026-04-05 01:04:29.310470 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-05 01:04:29.310487 | orchestrator | 2026-04-05 01:04:29.310503 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-05 01:04:29.310519 | orchestrator | Sunday 05 April 2026 01:03:01 +0000 (0:00:01.075) 0:00:10.188 ********** 2026-04-05 01:04:29.310536 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-05 01:04:29.310552 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310568 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310584 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:04:29.310599 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.310615 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-05 01:04:29.310631 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-05 01:04:29.310660 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:04:29.310677 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-05 01:04:29.310693 | orchestrator | 2026-04-05 01:04:29.310709 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-05 01:04:29.310725 | orchestrator | Sunday 05 April 2026 01:03:16 +0000 (0:00:14.953) 0:00:25.142 ********** 2026-04-05 
01:04:29.310741 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-05 01:04:29.310758 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-05 01:04:29.310774 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-05 01:04:29.310790 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-05 01:04:29.310820 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-05 01:04:29.310847 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-05 01:04:29.310864 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-05 01:04:29.310880 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-05 01:04:29.310895 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-05 01:04:29.310910 | orchestrator | 2026-04-05 01:04:29.310946 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-05 01:04:29.310961 | orchestrator | Sunday 05 April 2026 01:03:19 +0000 (0:00:03.371) 0:00:28.513 ********** 2026-04-05 01:04:29.310977 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-05 01:04:29.310992 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.311007 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.311021 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 
01:04:29.311036 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 01:04:29.311051 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-05 01:04:29.311065 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-05 01:04:29.311081 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:04:29.311095 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-05 01:04:29.311110 | orchestrator | 2026-04-05 01:04:29.311125 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:04:29.311140 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:04:29.311155 | orchestrator | 2026-04-05 01:04:29.311170 | orchestrator | 2026-04-05 01:04:29.311185 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:04:29.311199 | orchestrator | Sunday 05 April 2026 01:03:26 +0000 (0:00:06.655) 0:00:35.169 ********** 2026-04-05 01:04:29.311214 | orchestrator | =============================================================================== 2026-04-05 01:04:29.311229 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.95s 2026-04-05 01:04:29.311243 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.66s 2026-04-05 01:04:29.311258 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.80s 2026-04-05 01:04:29.311273 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.07s 2026-04-05 01:04:29.311288 | orchestrator | Check if target directories exist --------------------------------------- 3.37s 2026-04-05 01:04:29.311302 | orchestrator | Create share directory 
-------------------------------------------------- 1.08s 2026-04-05 01:04:29.311316 | orchestrator | 2026-04-05 01:04:29.311332 | orchestrator | 2026-04-05 01:04:29.311347 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-05 01:04:29.311362 | orchestrator | 2026-04-05 01:04:29.311377 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-05 01:04:29.311392 | orchestrator | Sunday 05 April 2026 01:03:30 +0000 (0:00:00.339) 0:00:00.339 ********** 2026-04-05 01:04:29.311407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-05 01:04:29.311423 | orchestrator | 2026-04-05 01:04:29.311438 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-05 01:04:29.311454 | orchestrator | Sunday 05 April 2026 01:03:30 +0000 (0:00:00.203) 0:00:00.543 ********** 2026-04-05 01:04:29.311469 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-05 01:04:29.311483 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-05 01:04:29.311507 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-05 01:04:29.311522 | orchestrator | 2026-04-05 01:04:29.311537 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-05 01:04:29.311552 | orchestrator | Sunday 05 April 2026 01:03:32 +0000 (0:00:01.506) 0:00:02.050 ********** 2026-04-05 01:04:29.311573 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-05 01:04:29.311588 | orchestrator | 2026-04-05 01:04:29.311602 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-05 01:04:29.311617 | orchestrator | Sunday 05 April 2026 
01:03:33 +0000 (0:00:01.120) 0:00:03.170 ********** 2026-04-05 01:04:29.311633 | orchestrator | changed: [testbed-manager] 2026-04-05 01:04:29.311648 | orchestrator | 2026-04-05 01:04:29.311662 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-05 01:04:29.311677 | orchestrator | Sunday 05 April 2026 01:03:34 +0000 (0:00:00.859) 0:00:04.030 ********** 2026-04-05 01:04:29.311691 | orchestrator | changed: [testbed-manager] 2026-04-05 01:04:29.311706 | orchestrator | 2026-04-05 01:04:29.311721 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-05 01:04:29.311737 | orchestrator | Sunday 05 April 2026 01:03:35 +0000 (0:00:00.877) 0:00:04.908 ********** 2026-04-05 01:04:29.311751 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-05 01:04:29.311765 | orchestrator | ok: [testbed-manager] 2026-04-05 01:04:29.311781 | orchestrator | 2026-04-05 01:04:29.311796 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-05 01:04:29.311819 | orchestrator | Sunday 05 April 2026 01:04:19 +0000 (0:00:43.871) 0:00:48.780 ********** 2026-04-05 01:04:29.311835 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-05 01:04:29.311849 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-05 01:04:29.311864 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-05 01:04:29.311880 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-05 01:04:29.311895 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-05 01:04:29.311911 | orchestrator | 2026-04-05 01:04:29.311944 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-05 01:04:29.311958 | orchestrator | Sunday 05 April 2026 01:04:23 +0000 (0:00:04.429) 0:00:53.210 ********** 2026-04-05 
01:04:29.311973 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-05 01:04:29.311988 | orchestrator | 2026-04-05 01:04:29.312004 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-05 01:04:29.312018 | orchestrator | Sunday 05 April 2026 01:04:24 +0000 (0:00:00.613) 0:00:53.824 ********** 2026-04-05 01:04:29.312033 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:04:29.312048 | orchestrator | 2026-04-05 01:04:29.312062 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-05 01:04:29.312077 | orchestrator | Sunday 05 April 2026 01:04:24 +0000 (0:00:00.135) 0:00:53.959 ********** 2026-04-05 01:04:29.312092 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:04:29.312108 | orchestrator | 2026-04-05 01:04:29.312122 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-05 01:04:29.312137 | orchestrator | Sunday 05 April 2026 01:04:24 +0000 (0:00:00.315) 0:00:54.275 ********** 2026-04-05 01:04:29.312151 | orchestrator | changed: [testbed-manager] 2026-04-05 01:04:29.312166 | orchestrator | 2026-04-05 01:04:29.312181 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-05 01:04:29.312195 | orchestrator | Sunday 05 April 2026 01:04:25 +0000 (0:00:01.486) 0:00:55.762 ********** 2026-04-05 01:04:29.312210 | orchestrator | changed: [testbed-manager] 2026-04-05 01:04:29.312225 | orchestrator | 2026-04-05 01:04:29.312239 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-05 01:04:29.312253 | orchestrator | Sunday 05 April 2026 01:04:26 +0000 (0:00:00.789) 0:00:56.551 ********** 2026-04-05 01:04:29.312276 | orchestrator | changed: [testbed-manager] 2026-04-05 01:04:29.312292 | orchestrator | 2026-04-05 01:04:29.312307 | orchestrator | RUNNING HANDLER [osism.services.cephclient : 
Copy bash completion scripts] ***** 2026-04-05 01:04:29.312321 | orchestrator | Sunday 05 April 2026 01:04:27 +0000 (0:00:00.624) 0:00:57.175 ********** 2026-04-05 01:04:29.312335 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-05 01:04:29.312350 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-05 01:04:29.312365 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-05 01:04:29.312380 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-05 01:04:29.312395 | orchestrator | 2026-04-05 01:04:29.312410 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:04:29.312425 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:04:29.312439 | orchestrator | 2026-04-05 01:04:29.312454 | orchestrator | 2026-04-05 01:04:29.312469 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:04:29.312484 | orchestrator | Sunday 05 April 2026 01:04:28 +0000 (0:00:01.552) 0:00:58.728 ********** 2026-04-05 01:04:29.312499 | orchestrator | =============================================================================== 2026-04-05 01:04:29.312513 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.87s 2026-04-05 01:04:29.312527 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.43s 2026-04-05 01:04:29.312542 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.55s 2026-04-05 01:04:29.312557 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.51s 2026-04-05 01:04:29.312571 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.49s 2026-04-05 01:04:29.312586 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.12s 2026-04-05 
01:04:29.312601 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2026-04-05 01:04:29.312616 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.86s 2026-04-05 01:04:29.312630 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s 2026-04-05 01:04:29.312645 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s 2026-04-05 01:04:29.312665 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.61s 2026-04-05 01:04:29.312680 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s 2026-04-05 01:04:29.312694 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2026-04-05 01:04:29.312708 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-04-05 01:04:29.312723 | orchestrator | 2026-04-05 01:04:29 | INFO  | Task 19769385-d9d2-4159-b307-6552855cacdc is in state SUCCESS 2026-04-05 01:04:29.312738 | orchestrator | 2026-04-05 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:32.353232 | orchestrator | 2026-04-05 01:04:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:04:32.354677 | orchestrator | 2026-04-05 01:04:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:04:32.356652 | orchestrator | 2026-04-05 01:04:32 | INFO  | Task 41c531e2-75ea-4236-b80a-f76ec29cfa94 is in state STARTED 2026-04-05 01:04:32.359602 | orchestrator | 2026-04-05 01:04:32 | INFO  | Task 0def1077-3ef0-45bc-b799-f7d3653d7d44 is in state STARTED 2026-04-05 01:04:32.359662 | orchestrator | 2026-04-05 01:04:32 | INFO  | Wait 1 second(s) until the next check
[… identical polling rounds elided: all four tasks remained in state STARTED, checked every ~3 s from 01:04:35 through 01:05:42 …]
2026-04-05 01:05:45.543043 | orchestrator | 2026-04-05 01:05:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:05:45.544464 | orchestrator | 2026-04-05 01:05:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:05:45.546172 | orchestrator | 2026-04-05 01:05:45 | INFO  | Task 41c531e2-75ea-4236-b80a-f76ec29cfa94 is in state STARTED 2026-04-05 01:05:45.547788 | orchestrator | 2026-04-05 01:05:45 | INFO  | Task 0def1077-3ef0-45bc-b799-f7d3653d7d44 is in state STARTED 2026-04-05 01:05:45.547821 | orchestrator | 2026-04-05 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:48.593146 | orchestrator | 2026-04-05 01:05:48 | INFO  | Task 
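The "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are a plain poll-and-sleep loop over asynchronous task states. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable standing in for whatever API the real osism client queries:

```python
import time

def wait_for_tasks(get_state, task_ids, poll_interval=1.0, timeout=600.0):
    """Poll task states until every task reaches a terminal state.

    get_state: callable mapping a task id to a state string such as
    "STARTED" or "SUCCESS" (a stand-in, not the real osism API).
    Raises TimeoutError if tasks are still pending past the deadline.
    """
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {pending}")
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in terminal:
                pending.remove(task_id)
        if pending:
            print(f"INFO  | Wait {poll_interval:g} second(s) until the next check")
            time.sleep(poll_interval)
    return True
```

With a fixed 1-second sleep plus per-check overhead the observed cadence comes out at roughly one round every 3 seconds, matching the timestamps in the log.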
b5c9e076-3d9e-4365-a6ef-5b6a720b26ac is in state STARTED 2026-04-05 01:05:48.595167 | orchestrator | 2026-04-05 01:05:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:05:48.597203 | orchestrator | 2026-04-05 01:05:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:05:48.599408 | orchestrator | 2026-04-05 01:05:48 | INFO  | Task 41c531e2-75ea-4236-b80a-f76ec29cfa94 is in state SUCCESS 2026-04-05 01:05:48.603715 | orchestrator | 2026-04-05 01:05:48 | INFO  | Task 0def1077-3ef0-45bc-b799-f7d3653d7d44 is in state SUCCESS 2026-04-05 01:05:48.604295 | orchestrator | 2026-04-05 01:05:48.604362 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 01:05:48.604500 | orchestrator | 2.16.14 2026-04-05 01:05:48.604523 | orchestrator | 2026-04-05 01:05:48.604543 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-05 01:05:48.604563 | orchestrator | 2026-04-05 01:05:48.604582 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-05 01:05:48.604649 | orchestrator | Sunday 05 April 2026 01:04:33 +0000 (0:00:00.220) 0:00:00.220 ********** 2026-04-05 01:05:48.604801 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.604824 | orchestrator | 2026-04-05 01:05:48.604865 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-05 01:05:48.604886 | orchestrator | Sunday 05 April 2026 01:04:35 +0000 (0:00:01.873) 0:00:02.093 ********** 2026-04-05 01:05:48.604904 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.604923 | orchestrator | 2026-04-05 01:05:48.604941 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-05 01:05:48.604961 | orchestrator | Sunday 05 April 2026 01:04:36 +0000 (0:00:01.103) 0:00:03.197 ********** 2026-04-05 
01:05:48.604979 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.605229 | orchestrator | 2026-04-05 01:05:48.605243 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-05 01:05:48.605267 | orchestrator | Sunday 05 April 2026 01:04:37 +0000 (0:00:01.196) 0:00:04.394 ********** 2026-04-05 01:05:48.605279 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.605309 | orchestrator | 2026-04-05 01:05:48.605321 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-05 01:05:48.605332 | orchestrator | Sunday 05 April 2026 01:04:38 +0000 (0:00:01.183) 0:00:05.578 ********** 2026-04-05 01:05:48.605342 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.605380 | orchestrator | 2026-04-05 01:05:48.605392 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-05 01:05:48.605434 | orchestrator | Sunday 05 April 2026 01:04:39 +0000 (0:00:01.022) 0:00:06.600 ********** 2026-04-05 01:05:48.605456 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.605475 | orchestrator | 2026-04-05 01:05:48.605493 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-05 01:05:48.605513 | orchestrator | Sunday 05 April 2026 01:04:41 +0000 (0:00:01.279) 0:00:07.879 ********** 2026-04-05 01:05:48.605539 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.605560 | orchestrator | 2026-04-05 01:05:48.605607 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-05 01:05:48.605627 | orchestrator | Sunday 05 April 2026 01:04:43 +0000 (0:00:02.263) 0:00:10.143 ********** 2026-04-05 01:05:48.605645 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.605662 | orchestrator | 2026-04-05 01:05:48.605706 | orchestrator | TASK [Create admin user] 
******************************************************* 2026-04-05 01:05:48.605725 | orchestrator | Sunday 05 April 2026 01:04:44 +0000 (0:00:01.390) 0:00:11.534 ********** 2026-04-05 01:05:48.605803 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:48.605822 | orchestrator | 2026-04-05 01:05:48.605842 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-05 01:05:48.605861 | orchestrator | Sunday 05 April 2026 01:05:22 +0000 (0:00:37.413) 0:00:48.948 ********** 2026-04-05 01:05:48.605878 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.605889 | orchestrator | 2026-04-05 01:05:48.605903 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:05:48.605920 | orchestrator | 2026-04-05 01:05:48.605931 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-05 01:05:48.605942 | orchestrator | Sunday 05 April 2026 01:05:22 +0000 (0:00:00.145) 0:00:49.094 ********** 2026-04-05 01:05:48.605952 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:48.605963 | orchestrator | 2026-04-05 01:05:48.605974 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:05:48.605985 | orchestrator | 2026-04-05 01:05:48.605996 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-05 01:05:48.606006 | orchestrator | Sunday 05 April 2026 01:05:34 +0000 (0:00:11.754) 0:01:00.848 ********** 2026-04-05 01:05:48.606153 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:48.606174 | orchestrator | 2026-04-05 01:05:48.606185 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:05:48.606196 | orchestrator | 2026-04-05 01:05:48.606207 | orchestrator | TASK [Restart ceph manager service] ******************************************** 
2026-04-05 01:05:48.606218 | orchestrator | Sunday 05 April 2026 01:05:35 +0000 (0:00:01.390) 0:01:02.239 ********** 2026-04-05 01:05:48.606229 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:48.606240 | orchestrator | 2026-04-05 01:05:48.606251 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:05:48.606263 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 01:05:48.606275 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:05:48.606286 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:05:48.606297 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:05:48.606308 | orchestrator | 2026-04-05 01:05:48.606319 | orchestrator | 2026-04-05 01:05:48.606330 | orchestrator | 2026-04-05 01:05:48.606341 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:05:48.606352 | orchestrator | Sunday 05 April 2026 01:05:46 +0000 (0:00:11.462) 0:01:13.701 ********** 2026-04-05 01:05:48.606363 | orchestrator | =============================================================================== 2026-04-05 01:05:48.606374 | orchestrator | Create admin user ------------------------------------------------------ 37.41s 2026-04-05 01:05:48.606403 | orchestrator | Restart ceph manager service ------------------------------------------- 24.61s 2026-04-05 01:05:48.606414 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.26s 2026-04-05 01:05:48.606425 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.87s 2026-04-05 01:05:48.606436 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.39s 
2026-04-05 01:05:48.606459 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.28s 2026-04-05 01:05:48.606470 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.20s 2026-04-05 01:05:48.606481 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.18s 2026-04-05 01:05:48.606492 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s 2026-04-05 01:05:48.606502 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.02s 2026-04-05 01:05:48.606513 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2026-04-05 01:05:48.606524 | orchestrator | 2026-04-05 01:05:48.606912 | orchestrator | 2026-04-05 01:05:48.606999 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:05:48.607016 | orchestrator | 2026-04-05 01:05:48.607046 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:05:48.607110 | orchestrator | Sunday 05 April 2026 01:04:32 +0000 (0:00:00.300) 0:00:00.300 ********** 2026-04-05 01:05:48.607139 | orchestrator | ok: [testbed-manager] 2026-04-05 01:05:48.607163 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:48.607181 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:48.607201 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:48.607221 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:05:48.607240 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:05:48.607259 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:05:48.607271 | orchestrator | 2026-04-05 01:05:48.607283 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:05:48.607294 | orchestrator | Sunday 05 April 2026 01:04:33 +0000 (0:00:00.757) 0:00:01.058 ********** 2026-04-05 
01:05:48.607305 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-05 01:05:48.607326 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-05 01:05:48.607342 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-05 01:05:48.607361 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-05 01:05:48.607378 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-05 01:05:48.607389 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-05 01:05:48.607400 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-05 01:05:48.607411 | orchestrator | 2026-04-05 01:05:48.607422 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-05 01:05:48.607434 | orchestrator | 2026-04-05 01:05:48.607445 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-05 01:05:48.607458 | orchestrator | Sunday 05 April 2026 01:04:33 +0000 (0:00:00.893) 0:00:01.951 ********** 2026-04-05 01:05:48.607475 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:05:48.607494 | orchestrator | 2026-04-05 01:05:48.607509 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-05 01:05:48.607522 | orchestrator | Sunday 05 April 2026 01:04:35 +0000 (0:00:01.163) 0:00:03.115 ********** 2026-04-05 01:05:48.607539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.607561 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 01:05:48.607605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.607654 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.607682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.607711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.607731 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.607754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.607787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.607801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.607812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.607843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.607856 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.607869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.607881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.607893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.607912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.607924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.607945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.607977 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:05:48.608002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608124 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608183 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608228 | orchestrator | 2026-04-05 01:05:48.608240 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-05 01:05:48.608251 | orchestrator | Sunday 05 April 2026 01:04:38 +0000 (0:00:03.364) 0:00:06.480 ********** 2026-04-05 01:05:48.608263 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:05:48.608274 | orchestrator | 2026-04-05 01:05:48.608285 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-05 
01:05:48.608296 | orchestrator | Sunday 05 April 2026 01:04:40 +0000 (0:00:01.635) 0:00:08.115 ********** 2026-04-05 01:05:48.608308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.608321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.608345 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 01:05:48.608359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.608371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.608388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.608400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.608412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608441 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.608458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-05 01:05:48.608501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608535 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608606 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:05:48.608618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.608700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608739 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.608802 | orchestrator | 2026-04-05 01:05:48.608823 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 
2026-04-05 01:05:48.608836 | orchestrator | Sunday 05 April 2026 01:04:46 +0000 (0:00:06.307) 0:00:14.422 ********** 2026-04-05 01:05:48.608849 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 01:05:48.608878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.608891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.608911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.608923 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.608935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.608947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.608959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.608971 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.608994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609037 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.609050 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:05:48.609104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609144 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609165 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.609177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609224 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609235 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.609247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609258 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.609275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609310 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.609322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609334 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609357 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.609369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609392 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.609404 | orchestrator | 2026-04-05 01:05:48.609415 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-05 01:05:48.609427 | orchestrator | Sunday 05 April 2026 01:04:48 +0000 (0:00:01.884) 0:00:16.307 ********** 2026-04-05 01:05:48.609445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 01:05:48.609597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609733 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609795 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.609809 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.609820 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-05 01:05:48.609870 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:05:48.609883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609895 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.609906 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.609917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.609929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.609985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:48.610002 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.610081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.610100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 
01:05:48.610112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.610123 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.610135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.610146 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.610158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:48.610177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.610189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 01:05:48.610200 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.610212 | orchestrator | 2026-04-05 01:05:48.610230 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-05 01:05:48.610242 | orchestrator | Sunday 05 April 2026 01:04:51 +0000 (0:00:02.747) 0:00:19.054 ********** 2026-04-05 01:05:48.610259 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', 
"http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 01:05:48.610272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.610284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.610310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.610323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.610335 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.610360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.610383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610395 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.610406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610424 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610494 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.610611 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:05:48.610630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610720 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.610742 | orchestrator | 2026-04-05 01:05:48.610760 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-05 01:05:48.610778 | orchestrator | Sunday 05 April 2026 01:04:56 +0000 (0:00:05.481) 0:00:24.536 ********** 2026-04-05 01:05:48.610797 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:48.610815 | orchestrator | 2026-04-05 01:05:48.610834 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-05 01:05:48.610853 | orchestrator | Sunday 05 April 2026 01:04:57 +0000 (0:00:00.879) 0:00:25.415 ********** 2026-04-05 01:05:48.610871 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.610889 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.610907 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.610926 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.610946 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 01:05:48.610966 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.610985 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.611005 | orchestrator | 2026-04-05 01:05:48.611025 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-05 01:05:48.611087 | orchestrator | Sunday 05 April 2026 01:04:58 +0000 (0:00:00.777) 0:00:26.192 ********** 2026-04-05 01:05:48.611109 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:48.611122 | orchestrator | 2026-04-05 01:05:48.611132 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-05 01:05:48.611143 | orchestrator | Sunday 05 April 2026 01:04:58 +0000 (0:00:00.763) 0:00:26.955 ********** 2026-04-05 01:05:48.611154 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:48.611166 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611179 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:48.611190 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611200 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-05 01:05:48.611211 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:48.611222 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:48.611233 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611244 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:48.611255 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611266 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-05 01:05:48.611276 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:05:48.611287 | orchestrator | [WARNING]: 
Skipped 2026-04-05 01:05:48.611298 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611309 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:48.611320 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611330 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-05 01:05:48.611341 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 01:05:48.611352 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:48.611363 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611374 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:48.611385 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611396 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-05 01:05:48.611407 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 01:05:48.611417 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:48.611428 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611439 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:48.611450 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611460 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-05 01:05:48.611471 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 01:05:48.611482 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:48.611493 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611504 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:48.611515 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611525 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-05 01:05:48.611536 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 01:05:48.611547 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:48.611558 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611569 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:48.611580 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:48.611599 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-05 01:05:48.611611 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 01:05:48.611622 | orchestrator | 2026-04-05 01:05:48.611633 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-05 01:05:48.611653 | orchestrator | Sunday 05 April 2026 01:05:00 +0000 (0:00:01.620) 0:00:28.576 ********** 2026-04-05 01:05:48.611665 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:48.611682 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.611694 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:48.611705 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.611716 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:48.611727 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.611738 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:48.611749 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.611760 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:48.611771 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.611782 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:48.611793 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.611805 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-05 01:05:48.611816 | orchestrator | 2026-04-05 01:05:48.611826 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-05 01:05:48.611837 | orchestrator | Sunday 05 April 2026 01:05:15 +0000 (0:00:15.275) 0:00:43.852 ********** 2026-04-05 01:05:48.611848 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:48.611859 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.611870 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:48.611881 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.611892 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:48.611908 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.611932 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:48.611957 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.611974 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:48.611992 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.612010 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:48.612028 | orchestrator | 
skipping: [testbed-node-5] 2026-04-05 01:05:48.612046 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-05 01:05:48.612096 | orchestrator | 2026-04-05 01:05:48.612113 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-05 01:05:48.612131 | orchestrator | Sunday 05 April 2026 01:05:18 +0000 (0:00:03.054) 0:00:46.906 ********** 2026-04-05 01:05:48.612150 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:48.612169 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.612190 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:48.612209 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:48.612245 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.612266 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.612287 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:48.612299 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.612310 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:48.612321 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.612332 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-05 01:05:48.612343 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:48.612354 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.612365 | orchestrator | 2026-04-05 01:05:48.612376 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-05 01:05:48.612387 | orchestrator | Sunday 05 April 2026 01:05:20 +0000 (0:00:01.388) 0:00:48.295 ********** 2026-04-05 01:05:48.612398 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:48.612409 | orchestrator | 2026-04-05 01:05:48.612420 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-05 01:05:48.612431 | orchestrator | Sunday 05 April 2026 01:05:21 +0000 (0:00:00.731) 0:00:49.026 ********** 2026-04-05 01:05:48.612442 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.612453 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.612464 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.612475 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.612486 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.612497 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.612518 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.612530 | orchestrator | 2026-04-05 01:05:48.612541 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-05 01:05:48.612558 | orchestrator | Sunday 05 April 2026 01:05:21 +0000 (0:00:00.756) 0:00:49.783 ********** 2026-04-05 01:05:48.612570 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.612581 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.612592 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.612603 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.612613 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:48.612624 | 
orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:48.612635 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:48.612646 | orchestrator | 2026-04-05 01:05:48.612656 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-05 01:05:48.612676 | orchestrator | Sunday 05 April 2026 01:05:23 +0000 (0:00:02.136) 0:00:51.920 ********** 2026-04-05 01:05:48.612701 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:48.612724 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.612741 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:48.612759 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.612779 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:48.612798 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.612817 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:48.612837 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.612849 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:48.612860 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.612871 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:48.612892 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.612902 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:48.612913 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.612924 | orchestrator | 2026-04-05 01:05:48.612935 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-05 
01:05:48.612946 | orchestrator | Sunday 05 April 2026 01:05:25 +0000 (0:00:01.311) 0:00:53.232 ********** 2026-04-05 01:05:48.612957 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:48.612968 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:48.612979 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.612990 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.613001 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:48.613012 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.613023 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:48.613034 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.613044 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:48.613056 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.613236 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-05 01:05:48.613254 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:48.613265 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.613276 | orchestrator | 2026-04-05 01:05:48.613288 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-05 01:05:48.613299 | orchestrator | Sunday 05 April 2026 01:05:26 +0000 (0:00:01.531) 0:00:54.763 ********** 2026-04-05 01:05:48.613310 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:48.613321 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-05 01:05:48.613332 | orchestrator | due to this access issue: 2026-04-05 01:05:48.613343 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-05 01:05:48.613354 | orchestrator | not a directory 2026-04-05 01:05:48.613363 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:48.613373 | orchestrator | 2026-04-05 01:05:48.613383 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-05 01:05:48.613392 | orchestrator | Sunday 05 April 2026 01:05:27 +0000 (0:00:01.106) 0:00:55.870 ********** 2026-04-05 01:05:48.613402 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.613412 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.613421 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.613431 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.613440 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.613450 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.613460 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:48.613469 | orchestrator | 2026-04-05 01:05:48.613479 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-05 01:05:48.613489 | orchestrator | Sunday 05 April 2026 01:05:28 +0000 (0:00:00.638) 0:00:56.508 ********** 2026-04-05 01:05:48.613498 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:48.613508 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:48.613517 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:48.613527 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:48.613537 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:48.613570 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:48.613581 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
01:05:48.613590 | orchestrator | 2026-04-05 01:05:48.613600 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-05 01:05:48.613617 | orchestrator | Sunday 05 April 2026 01:05:29 +0000 (0:00:00.726) 0:00:57.235 ********** 2026-04-05 01:05:48.613628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.613642 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 
2026-04-05 01:05:48.613666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.613685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.613696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.613707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.613734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.613745 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.613755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:48.613766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.613776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.613787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.613797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613835 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613867 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.613885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.613921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613976 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:05:48.613987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.613997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.614007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.614082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:48.614115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.614156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:48.614174 | orchestrator | 
2026-04-05 01:05:48.614184 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-04-05 01:05:48.614194 | orchestrator | Sunday 05 April 2026 01:05:33 +0000 (0:00:03.922) 0:01:01.158 ********** 2026-04-05 01:05:48.614204 | orchestrator | changed: [testbed-manager] => { 2026-04-05 01:05:48.614214 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:48.614224 | orchestrator | } 2026-04-05 01:05:48.614235 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 01:05:48.614246 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:48.614257 | orchestrator | } 2026-04-05 01:05:48.614267 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 01:05:48.614278 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:48.614289 | orchestrator | } 2026-04-05 01:05:48.614300 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 01:05:48.614311 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:48.614322 | orchestrator | } 2026-04-05 01:05:48.614332 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 01:05:48.614343 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:48.614354 | orchestrator | } 2026-04-05 01:05:48.614365 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 01:05:48.614376 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:48.614386 | orchestrator | } 2026-04-05 01:05:48.614397 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 01:05:48.614408 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 01:05:48.614419 | orchestrator | } 2026-04-05 01:05:48.614430 | orchestrator | 2026-04-05 01:05:48.614441 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 01:05:48.614451 | orchestrator | Sunday 05 April 2026 01:05:33 +0000 (0:00:00.776) 0:01:01.934 ********** 2026-04-05 01:05:48.614463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:48.614475 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-05 01:05:48.614495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:48.614513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:48.614536 | orchestrator | skipping: [testbed-manager] => (item=prometheus-node-exporter)
2026-04-05 01:05:48.614550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:48.614563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:48.614574 | orchestrator | skipping: [testbed-manager] => (item=prometheus-cadvisor)
2026-04-05 01:05:48.614593 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:05:48.614605 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:48.614617 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:05:48.614640 | orchestrator | skipping: [testbed-node-1] => (item=prometheus-node-exporter)
2026-04-05 01:05:48.614653 | orchestrator | skipping: [testbed-node-2] => (item=prometheus-node-exporter)
2026-04-05 01:05:48.614665 | orchestrator | skipping: [testbed-node-1] => (item=prometheus-mysqld-exporter)
2026-04-05 01:05:48.614676 | orchestrator | skipping: [testbed-node-2] => (item=prometheus-mysqld-exporter)
2026-04-05 01:05:48.614693 | orchestrator | skipping: [testbed-node-1] => (item=prometheus-memcached-exporter)
2026-04-05 01:05:48.614705 | orchestrator | skipping: [testbed-node-2] => (item=prometheus-memcached-exporter)
2026-04-05 01:05:48.614717 | orchestrator | skipping: [testbed-node-1] => (item=prometheus-cadvisor)
2026-04-05 01:05:48.614729 | orchestrator | skipping: [testbed-node-2] => (item=prometheus-cadvisor)
2026-04-05 01:05:48.614751 | orchestrator | skipping: [testbed-node-1] => (item=prometheus-elasticsearch-exporter)
2026-04-05 01:05:48.614763 | orchestrator | skipping: [testbed-node-2] => (item=prometheus-elasticsearch-exporter)
2026-04-05 01:05:48.614775 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:05:48.614786 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:05:48.614797 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:05:48.614808 | orchestrator | skipping: [testbed-node-3] => (item=prometheus-node-exporter)
2026-04-05 01:05:48.614826 | orchestrator | skipping: [testbed-node-3] => (item=prometheus-cadvisor)
2026-04-05 01:05:48.614838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:48.614849 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:05:48.614860 | orchestrator | skipping: [testbed-node-4] => (item=prometheus-node-exporter)
2026-04-05 01:05:48.614872 | orchestrator | skipping: [testbed-node-4] => (item=prometheus-cadvisor)
2026-04-05 01:05:48.614893 | orchestrator | skipping: [testbed-node-4] => (item=prometheus-libvirt-exporter)
2026-04-05 01:05:48.614905 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:05:48.614917 | orchestrator | skipping: [testbed-node-5] => (item=prometheus-node-exporter)
2026-04-05 01:05:48.614928 | orchestrator | skipping: [testbed-node-5] => (item=prometheus-cadvisor)
2026-04-05 01:05:48.614946 | orchestrator | skipping: [testbed-node-5] => (item=prometheus-libvirt-exporter)
2026-04-05 01:05:48.614957 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:05:48.614968 | orchestrator |
2026-04-05 01:05:48.614979 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-05 01:05:48.614990 | orchestrator | Sunday 05 April 2026 01:05:35 +0000 (0:00:01.847) 0:01:03.782 **********
2026-04-05 01:05:48.615001 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-05 01:05:48.615013 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:05:48.615023 | orchestrator |
2026-04-05 01:05:48.615034 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:05:48.615045 | orchestrator | Sunday 05 April 2026 01:05:37 +0000 (0:00:01.241) 0:01:05.024 **********
2026-04-05 01:05:48.615056 | orchestrator |
2026-04-05 01:05:48.615091 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:05:48.615103 | orchestrator | Sunday 05 April 2026 01:05:37 +0000 (0:00:00.067) 0:01:05.092 **********
2026-04-05 01:05:48.615114 | orchestrator |
2026-04-05 01:05:48.615125 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:05:48.615136 | orchestrator | Sunday 05 April 2026 01:05:37 +0000 (0:00:00.297) 0:01:05.389 **********
2026-04-05 01:05:48.615147 | orchestrator |
2026-04-05 01:05:48.615158 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:05:48.615169 | orchestrator | Sunday 05 April 2026 01:05:37 +0000 (0:00:00.089) 0:01:05.478 **********
2026-04-05 01:05:48.615180 | orchestrator |
2026-04-05 01:05:48.615192 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:05:48.615203 | orchestrator | Sunday 05 April 2026 01:05:37 +0000 (0:00:00.062) 0:01:05.541 **********
2026-04-05 01:05:48.615213 | orchestrator |
2026-04-05 01:05:48.615225 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:05:48.615235 | orchestrator | Sunday 05 April 2026 01:05:37 +0000 (0:00:00.067) 0:01:05.609 **********
2026-04-05 01:05:48.615246 | orchestrator |
2026-04-05 01:05:48.615257 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 01:05:48.615268 | orchestrator | Sunday 05 April 2026 01:05:37 +0000 (0:00:00.067) 0:01:05.676 **********
2026-04-05 01:05:48.615293 | orchestrator |
2026-04-05 01:05:48.615304 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-05 01:05:48.615315 | orchestrator | Sunday 05 April 2026 01:05:37 +0000 (0:00:00.091) 0:01:05.768 **********
2026-04-05 01:05:48.615342 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_co6233fa/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_co6233fa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_co6233fa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_co6233fa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-05 01:05:48.615366 | orchestrator | 2026-04-05 01:05:48.615378 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-05 01:05:48.615389 | orchestrator | Sunday 05 April 2026 01:05:40 +0000 (0:00:02.622) 0:01:08.390 ********** 2026-04-05 01:05:48.615414 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"}
2026-04-05 01:05:48.615435 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"}
2026-04-05 01:05:48.615462 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"}
2026-04-05 01:05:48.615488 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"}
2026-04-05 01:05:48.615524 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"}
2026-04-05 01:05:48.615558 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"}
2026-04-05 01:05:48.615577 | orchestrator |
2026-04-05 01:05:48.615589 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:05:48.615601 | orchestrator | testbed-manager : ok=18  changed=9  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0
2026-04-05 01:05:48.615613 | orchestrator | testbed-node-0 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-05 01:05:48.615624 | orchestrator | testbed-node-1 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-05 01:05:48.615635 | orchestrator | testbed-node-2 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-05 01:05:48.615646 | orchestrator | testbed-node-3 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0
2026-04-05 01:05:48.615657 | orchestrator | testbed-node-4 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0
2026-04-05 01:05:48.615668 | orchestrator | testbed-node-5 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0
2026-04-05 01:05:48.615679 | orchestrator |
2026-04-05 01:05:48.615690 | orchestrator |
2026-04-05 01:05:48.615701 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:05:48.615712 | orchestrator | Sunday 05 April 2026 01:05:45 +0000 (0:00:05.053) 0:01:13.444 **********
2026-04-05 01:05:48.615723 | orchestrator | ===============================================================================
2026-04-05 01:05:48.615734 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.28s
2026-04-05 01:05:48.615745 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.31s
2026-04-05 01:05:48.615755 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.48s
2026-04-05 01:05:48.615767 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 5.05s
2026-04-05 01:05:48.615777 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 3.92s
2026-04-05 01:05:48.615789 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.36s
2026-04-05 01:05:48.615800 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.05s
2026-04-05 01:05:48.615811 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.75s
2026-04-05 01:05:48.615821 | orchestrator | prometheus : Restart prometheus-server container ------------------------ 2.62s
2026-04-05 01:05:48.615832 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.14s
2026-04-05 
01:05:48.615843 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.88s 2026-04-05 01:05:48.615854 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.85s 2026-04-05 01:05:48.615865 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.64s 2026-04-05 01:05:48.615875 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.62s 2026-04-05 01:05:48.615886 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.53s 2026-04-05 01:05:48.615897 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.39s 2026-04-05 01:05:48.615914 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.31s 2026-04-05 01:05:48.615925 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 1.24s 2026-04-05 01:05:48.615936 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.16s 2026-04-05 01:05:48.615947 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 1.11s 2026-04-05 01:05:48.615958 | orchestrator | 2026-04-05 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:51.653335 | orchestrator | 2026-04-05 01:05:51 | INFO  | Task b5c9e076-3d9e-4365-a6ef-5b6a720b26ac is in state STARTED 2026-04-05 01:05:51.654945 | orchestrator | 2026-04-05 01:05:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:05:51.656740 | orchestrator | 2026-04-05 01:05:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:05:51.657028 | orchestrator | 2026-04-05 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:54.702307 | orchestrator | 2026-04-05 01:05:54 | INFO  | Task b5c9e076-3d9e-4365-a6ef-5b6a720b26ac is in state 
STARTED 2026-04-05 01:05:54.702726 | orchestrator | 2026-04-05 01:05:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:05:54.706361 | orchestrator | 2026-04-05 01:05:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:05:54.706415 | orchestrator | 2026-04-05 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:57.751646 | orchestrator | 2026-04-05 01:05:57 | INFO  | Task b5c9e076-3d9e-4365-a6ef-5b6a720b26ac is in state STARTED 2026-04-05 01:05:57.752862 | orchestrator | 2026-04-05 01:05:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:05:57.754860 | orchestrator | 2026-04-05 01:05:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:05:57.754922 | orchestrator | 2026-04-05 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:00.795380 | orchestrator | 2026-04-05 01:06:00 | INFO  | Task b5c9e076-3d9e-4365-a6ef-5b6a720b26ac is in state STARTED 2026-04-05 01:06:00.799047 | orchestrator | 2026-04-05 01:06:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:06:00.801270 | orchestrator | 2026-04-05 01:06:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:06:00.801332 | orchestrator | 2026-04-05 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:03.840288 | orchestrator | 2026-04-05 01:06:03 | INFO  | Task b5c9e076-3d9e-4365-a6ef-5b6a720b26ac is in state STARTED 2026-04-05 01:06:03.842456 | orchestrator | 2026-04-05 01:06:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:06:03.843616 | orchestrator | 2026-04-05 01:06:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:06:03.843651 | orchestrator | 2026-04-05 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:06.893171 | orchestrator | 
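Editor's note on the failures above: every fatal task reports the same Docker error, `400 Bad Request ("invalid reference format")`, for the image `registry.osism.tech/kolla/release//prometheus-node-exporter` (the grafana items below show the same `release//grafana` pattern). The doubled slash is an empty path component, which the Docker reference grammar rejects; it typically appears when a namespace/prefix variable renders empty during image-name templating. The sketch below is a hypothetical pre-flight check, not part of kolla-ansible: `check_image_reference` is an invented helper, and the component regex is a simplification of the full distribution-spec grammar.

```python
import re

# Simplified component rule from the Docker/OCI reference grammar:
# lowercase alphanumerics joined by '.', '_', '__', or one-or-more '-'.
# (This is an approximation, not the complete specification.)
COMPONENT = re.compile(r"^[a-z0-9]+(?:(?:[._]|__|-+)[a-z0-9]+)*$")

def check_image_reference(image: str) -> list[str]:
    """Return a list of problems in the repository part of `image`.

    Hypothetical helper: strips a trailing ':tag' if present, skips the
    registry host, and validates each remaining path component. An empty
    component (a doubled '/') is exactly what produces Docker's
    'invalid reference format' error seen in the log above.
    """
    last = image.split("/")[-1]
    repo = image.rsplit(":", 1)[0] if ":" in last else image
    problems = []
    for part in repo.split("/")[1:]:  # [0] is the registry host
        if not part:
            problems.append("empty path component (doubled '/')")
        elif not COMPONENT.match(part):
            problems.append(f"invalid component: {part!r}")
    return problems

# The failing reference from the traceback above:
bad = "registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328"
print(check_image_reference(bad))  # → ["empty path component (doubled '/')"]
```

Running such a check over the rendered image names before `kolla_container` pulls them would surface the empty-prefix misconfiguration as a clear message instead of a 400 from the Docker API.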
2026-04-05 01:06:06.893293 | orchestrator | 2026-04-05 01:06:06.893319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:06:06.893340 | orchestrator | 2026-04-05 01:06:06.893359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:06:06.893381 | orchestrator | Sunday 05 April 2026 01:05:49 +0000 (0:00:00.280) 0:00:00.280 ********** 2026-04-05 01:06:06.893401 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:06:06.893423 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:06:06.893445 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:06:06.893458 | orchestrator | 2026-04-05 01:06:06.893470 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:06:06.893510 | orchestrator | Sunday 05 April 2026 01:05:49 +0000 (0:00:00.272) 0:00:00.553 ********** 2026-04-05 01:06:06.893522 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-05 01:06:06.893533 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-05 01:06:06.893544 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-05 01:06:06.893555 | orchestrator | 2026-04-05 01:06:06.893569 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-05 01:06:06.893582 | orchestrator | 2026-04-05 01:06:06.893596 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-05 01:06:06.893610 | orchestrator | Sunday 05 April 2026 01:05:49 +0000 (0:00:00.270) 0:00:00.823 ********** 2026-04-05 01:06:06.893624 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:06:06.893638 | orchestrator | 2026-04-05 01:06:06.893652 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 
2026-04-05 01:06:06.893665 | orchestrator | Sunday 05 April 2026 01:05:50 +0000 (0:00:00.600) 0:00:01.423 ********** 2026-04-05 01:06:06.893682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.893714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.893729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.893742 | orchestrator | 2026-04-05 01:06:06.893756 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-05 01:06:06.893769 | orchestrator | Sunday 05 April 2026 01:05:51 +0000 (0:00:01.055) 0:00:02.479 ********** 2026-04-05 01:06:06.893782 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:06:06.893796 | orchestrator | 2026-04-05 01:06:06.893809 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-05 01:06:06.893834 | orchestrator | Sunday 05 April 2026 01:05:52 +0000 (0:00:00.824) 0:00:03.304 ********** 2026-04-05 01:06:06.893848 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:06:06.893861 | orchestrator | 2026-04-05 01:06:06.893874 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-05 01:06:06.893908 | orchestrator | Sunday 05 April 2026 01:05:52 +0000 (0:00:00.573) 0:00:03.877 ********** 2026-04-05 01:06:06.893923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.893938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.893967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.893987 | orchestrator | 2026-04-05 01:06:06.894004 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-05 01:06:06.894180 | orchestrator | Sunday 05 April 2026 01:05:54 +0000 (0:00:01.337) 0:00:05.215 ********** 2026-04-05 01:06:06.894210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:06:06.894225 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:06.894249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:06:06.894276 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:06:06.894300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:06:06.894329 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:06:06.894381 | orchestrator | 2026-04-05 01:06:06.894400 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-05 01:06:06.894415 | orchestrator | Sunday 05 April 2026 01:05:54 +0000 (0:00:00.433) 0:00:05.648 ********** 2026-04-05 01:06:06.894433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:06:06.894453 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:06.894482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:06:06.894503 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:06:06.894522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 01:06:06.894552 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:06:06.894563 | orchestrator | 2026-04-05 01:06:06.894575 | orchestrator | 
TASK [grafana : Copying over config.json files] ******************************** 2026-04-05 01:06:06.894585 | orchestrator | Sunday 05 April 2026 01:05:55 +0000 (0:00:00.585) 0:00:06.233 ********** 2026-04-05 01:06:06.894609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.894622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.894634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.894645 | orchestrator | 2026-04-05 01:06:06.894656 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-05 01:06:06.894666 | orchestrator | Sunday 05 April 2026 01:05:56 +0000 (0:00:01.212) 0:00:07.445 ********** 2026-04-05 01:06:06.894683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.894702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.894721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 01:06:06.894733 | orchestrator | 2026-04-05 01:06:06.894744 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-05 01:06:06.894755 | orchestrator | Sunday 05 April 2026 01:05:57 +0000 (0:00:01.446) 0:00:08.892 ********** 2026-04-05 01:06:06.894766 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:06.894777 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:06:06.894788 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:06:06.894799 | orchestrator | 2026-04-05 01:06:06.894809 | orchestrator | TASK [grafana : Configuring Prometheus as data source for 
Grafana] ************* 2026-04-05 01:06:06.894820 | orchestrator | Sunday 05 April 2026 01:05:58 +0000 (0:00:00.258) 0:00:09.151 ********** 2026-04-05 01:06:06.894831 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 01:06:06.894843 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 01:06:06.894853 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 01:06:06.894864 | orchestrator | 2026-04-05 01:06:06.894875 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-05 01:06:06.894885 | orchestrator | Sunday 05 April 2026 01:05:59 +0000 (0:00:01.163) 0:00:10.315 ********** 2026-04-05 01:06:06.894897 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 01:06:06.894908 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 01:06:06.894919 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 01:06:06.894929 | orchestrator | 2026-04-05 01:06:06.894940 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-04-05 01:06:06.894951 | orchestrator | Sunday 05 April 2026 01:06:00 +0000 (0:00:01.120) 0:00:11.435 ********** 2026-04-05 01:06:06.894962 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:06:06.894972 | orchestrator | 2026-04-05 01:06:06.894983 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-04-05 01:06:06.894994 | orchestrator | Sunday 05 April 2026 01:06:01 +0000 (0:00:00.678) 0:00:12.114 ********** 2026-04-05 01:06:06.895005 | orchestrator | ok: 
[testbed-node-0]
2026-04-05 01:06:06.895016 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:06:06.895033 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:06:06.895044 | orchestrator |
2026-04-05 01:06:06.895055 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-05 01:06:06.895065 | orchestrator | Sunday 05 April 2026 01:06:01 +0000 (0:00:00.823) 0:00:12.938 **********
2026-04-05 01:06:06.895076 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:06:06.895087 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:06:06.895144 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:06:06.895155 | orchestrator |
2026-04-05 01:06:06.895172 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-04-05 01:06:06.895183 | orchestrator | Sunday 05 April 2026 01:06:03 +0000 (0:00:01.187) 0:00:14.125 **********
2026-04-05 01:06:06.895196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:06:06.895216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:06:06.895255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:06:06.895275 | orchestrator |
2026-04-05 01:06:06.895293 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-04-05 01:06:06.895311 | orchestrator | Sunday 05 April 2026 01:06:04 +0000 (0:00:00.332) 0:00:15.178 **********
2026-04-05 01:06:06.895328 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 01:06:06.895343 | orchestrator |     "msg": "Notifying handlers"
2026-04-05 01:06:06.895361 | orchestrator | }
2026-04-05 01:06:06.895380 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 01:06:06.895399 | orchestrator |     "msg": "Notifying handlers"
2026-04-05 01:06:06.895417 | orchestrator | }
2026-04-05 01:06:06.895437 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 01:06:06.895456 | orchestrator |     "msg": "Notifying handlers"
2026-04-05 01:06:06.895474 | orchestrator | }
2026-04-05 01:06:06.895492 | orchestrator |
2026-04-05 01:06:06.895510 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 01:06:06.895541 | orchestrator | Sunday 05 April 2026 01:06:04 +0000 (0:00:00.332) 0:00:15.511 **********
2026-04-05 01:06:06.895561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:06:06.895582 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:06:06.895624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:06:06.895644 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:06:06.895660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 01:06:06.895671 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:06:06.895682 | orchestrator |
2026-04-05 01:06:06.895693 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-05 01:06:06.895703 | orchestrator | Sunday 05 April 2026 01:06:05 +0000 (0:00:00.835) 0:00:16.347 **********
2026-04-05 01:06:06.895714 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-05 01:06:06.895725 | orchestrator |
2026-04-05 01:06:06.895736 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:06:06.895757 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-05 01:06:06.895770 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 01:06:06.895782 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 01:06:06.895793 | orchestrator |
2026-04-05 01:06:06.895804 | orchestrator |
2026-04-05 01:06:06.895815 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:06:06.895825 | orchestrator | Sunday 05 April 2026 01:06:06 +0000 (0:00:00.736) 0:00:17.083 **********
2026-04-05 01:06:06.895844 | orchestrator | ===============================================================================
2026-04-05 01:06:06.895855 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s
2026-04-05 01:06:06.895866 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s
2026-04-05 01:06:06.895876 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.21s
2026-04-05 01:06:06.895887 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.19s
2026-04-05 01:06:06.895898 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.16s
2026-04-05 01:06:06.895908 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.12s
2026-04-05 01:06:06.895919 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.06s
2026-04-05 01:06:06.895930 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.05s
2026-04-05 01:06:06.895940 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.84s
2026-04-05 01:06:06.895951 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.82s
2026-04-05 01:06:06.895962 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.82s
2026-04-05 01:06:06.895972 | orchestrator | grafana : Creating grafana database ------------------------------------- 0.74s
2026-04-05 01:06:06.895983 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.68s
2026-04-05 01:06:06.895993 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s
2026-04-05 01:06:06.896004 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.59s
2026-04-05 01:06:06.896015 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.57s
2026-04-05 01:06:06.896025 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.43s
2026-04-05 01:06:06.896036 | orchestrator | service-check-containers : grafana | Notify handlers to restart containers --- 0.33s
2026-04-05 01:06:06.896046 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-04-05 01:06:06.896063 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.27s
2026-04-05 01:06:06.896073 | orchestrator | 2026-04-05 01:06:06 | INFO  | Task b5c9e076-3d9e-4365-a6ef-5b6a720b26ac is in state SUCCESS
2026-04-05 01:06:06.896085 | orchestrator | 2026-04-05 01:06:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:06:06.896695 | orchestrator | 2026-04-05 01:06:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299
is in state STARTED
2026-04-05 01:06:06.896729 | orchestrator | 2026-04-05 01:06:06 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:06:09.952577 | orchestrator | 2026-04-05 01:06:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:06:09.953282 | orchestrator | 2026-04-05 01:06:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:06:09.953313 | orchestrator | 2026-04-05 01:06:09 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: the same two "is in state STARTED" checks and "Wait 1 second(s) until the next check" message repeat every ~3 seconds from 01:06:13 through 01:10:01; both tasks remain in state STARTED throughout ...]
2026-04-05 01:10:04.777366 | orchestrator | 2026-04-05 01:10:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:10:04.779741 | orchestrator | 2026-04-05 01:10:04 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:04.779812 | orchestrator | 2026-04-05 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:07.824137 | orchestrator | 2026-04-05 01:10:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:07.824823 | orchestrator | 2026-04-05 01:10:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:07.824850 | orchestrator | 2026-04-05 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:10.873722 | orchestrator | 2026-04-05 01:10:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:10.875106 | orchestrator | 2026-04-05 01:10:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:10.875162 | orchestrator | 2026-04-05 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:13.922494 | orchestrator | 2026-04-05 01:10:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:13.924030 | orchestrator | 2026-04-05 01:10:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:13.924061 | orchestrator | 2026-04-05 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:16.969925 | orchestrator | 2026-04-05 01:10:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:16.972162 | orchestrator | 2026-04-05 01:10:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:16.972251 | orchestrator | 2026-04-05 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:20.024815 | orchestrator | 2026-04-05 01:10:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:20.027567 | orchestrator | 2026-04-05 01:10:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:10:20.027606 | orchestrator | 2026-04-05 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:23.079755 | orchestrator | 2026-04-05 01:10:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:23.080804 | orchestrator | 2026-04-05 01:10:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:23.080937 | orchestrator | 2026-04-05 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:26.137349 | orchestrator | 2026-04-05 01:10:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:26.138548 | orchestrator | 2026-04-05 01:10:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:26.138633 | orchestrator | 2026-04-05 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:29.189147 | orchestrator | 2026-04-05 01:10:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:29.191304 | orchestrator | 2026-04-05 01:10:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:29.191472 | orchestrator | 2026-04-05 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:32.247854 | orchestrator | 2026-04-05 01:10:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:32.250317 | orchestrator | 2026-04-05 01:10:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:32.250377 | orchestrator | 2026-04-05 01:10:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:35.297791 | orchestrator | 2026-04-05 01:10:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:35.299701 | orchestrator | 2026-04-05 01:10:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:35.299783 | orchestrator | 2026-04-05 01:10:35 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:10:38.349142 | orchestrator | 2026-04-05 01:10:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:38.351892 | orchestrator | 2026-04-05 01:10:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:38.351943 | orchestrator | 2026-04-05 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:41.393472 | orchestrator | 2026-04-05 01:10:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:41.394206 | orchestrator | 2026-04-05 01:10:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:41.394241 | orchestrator | 2026-04-05 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:44.442587 | orchestrator | 2026-04-05 01:10:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:44.443230 | orchestrator | 2026-04-05 01:10:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:44.443248 | orchestrator | 2026-04-05 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:47.487102 | orchestrator | 2026-04-05 01:10:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:47.489975 | orchestrator | 2026-04-05 01:10:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:47.490102 | orchestrator | 2026-04-05 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:50.535283 | orchestrator | 2026-04-05 01:10:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:50.541243 | orchestrator | 2026-04-05 01:10:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:50.541326 | orchestrator | 2026-04-05 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:53.585993 | orchestrator | 2026-04-05 
01:10:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:53.588163 | orchestrator | 2026-04-05 01:10:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:53.588210 | orchestrator | 2026-04-05 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:56.644585 | orchestrator | 2026-04-05 01:10:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:56.648871 | orchestrator | 2026-04-05 01:10:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:56.648952 | orchestrator | 2026-04-05 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:59.697070 | orchestrator | 2026-04-05 01:10:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:10:59.698984 | orchestrator | 2026-04-05 01:10:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:10:59.699058 | orchestrator | 2026-04-05 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:02.737299 | orchestrator | 2026-04-05 01:11:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:02.740369 | orchestrator | 2026-04-05 01:11:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:02.740455 | orchestrator | 2026-04-05 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:05.791124 | orchestrator | 2026-04-05 01:11:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:05.793632 | orchestrator | 2026-04-05 01:11:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:05.793774 | orchestrator | 2026-04-05 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:08.845105 | orchestrator | 2026-04-05 01:11:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:11:08.847750 | orchestrator | 2026-04-05 01:11:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:08.847822 | orchestrator | 2026-04-05 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:11.896597 | orchestrator | 2026-04-05 01:11:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:11.898399 | orchestrator | 2026-04-05 01:11:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:11.898570 | orchestrator | 2026-04-05 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:14.950701 | orchestrator | 2026-04-05 01:11:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:14.952475 | orchestrator | 2026-04-05 01:11:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:14.952533 | orchestrator | 2026-04-05 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:18.003737 | orchestrator | 2026-04-05 01:11:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:18.006609 | orchestrator | 2026-04-05 01:11:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:18.006651 | orchestrator | 2026-04-05 01:11:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:21.050667 | orchestrator | 2026-04-05 01:11:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:21.052663 | orchestrator | 2026-04-05 01:11:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:21.052727 | orchestrator | 2026-04-05 01:11:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:24.099791 | orchestrator | 2026-04-05 01:11:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:24.104263 | orchestrator | 2026-04-05 01:11:24 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:24.104338 | orchestrator | 2026-04-05 01:11:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:27.162712 | orchestrator | 2026-04-05 01:11:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:27.167030 | orchestrator | 2026-04-05 01:11:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:27.167117 | orchestrator | 2026-04-05 01:11:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:30.217844 | orchestrator | 2026-04-05 01:11:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:30.219642 | orchestrator | 2026-04-05 01:11:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:30.219697 | orchestrator | 2026-04-05 01:11:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:33.270876 | orchestrator | 2026-04-05 01:11:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:33.272343 | orchestrator | 2026-04-05 01:11:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:33.272389 | orchestrator | 2026-04-05 01:11:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:36.322610 | orchestrator | 2026-04-05 01:11:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:36.324674 | orchestrator | 2026-04-05 01:11:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:36.324720 | orchestrator | 2026-04-05 01:11:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:39.371206 | orchestrator | 2026-04-05 01:11:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:39.372060 | orchestrator | 2026-04-05 01:11:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:11:39.372286 | orchestrator | 2026-04-05 01:11:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:42.412058 | orchestrator | 2026-04-05 01:11:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:42.414989 | orchestrator | 2026-04-05 01:11:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:42.415060 | orchestrator | 2026-04-05 01:11:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:45.468047 | orchestrator | 2026-04-05 01:11:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:45.470307 | orchestrator | 2026-04-05 01:11:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:45.470359 | orchestrator | 2026-04-05 01:11:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:48.510812 | orchestrator | 2026-04-05 01:11:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:48.512651 | orchestrator | 2026-04-05 01:11:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:48.512681 | orchestrator | 2026-04-05 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:51.553398 | orchestrator | 2026-04-05 01:11:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:51.556072 | orchestrator | 2026-04-05 01:11:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:51.556134 | orchestrator | 2026-04-05 01:11:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:54.603675 | orchestrator | 2026-04-05 01:11:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:54.606285 | orchestrator | 2026-04-05 01:11:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:54.606344 | orchestrator | 2026-04-05 01:11:54 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:11:57.648712 | orchestrator | 2026-04-05 01:11:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:11:57.650947 | orchestrator | 2026-04-05 01:11:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:11:57.651002 | orchestrator | 2026-04-05 01:11:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:00.708943 | orchestrator | 2026-04-05 01:12:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:00.710296 | orchestrator | 2026-04-05 01:12:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:00.710380 | orchestrator | 2026-04-05 01:12:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:03.762400 | orchestrator | 2026-04-05 01:12:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:03.764532 | orchestrator | 2026-04-05 01:12:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:03.764662 | orchestrator | 2026-04-05 01:12:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:06.814774 | orchestrator | 2026-04-05 01:12:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:06.816602 | orchestrator | 2026-04-05 01:12:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:06.816712 | orchestrator | 2026-04-05 01:12:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:09.860726 | orchestrator | 2026-04-05 01:12:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:09.861748 | orchestrator | 2026-04-05 01:12:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:09.861794 | orchestrator | 2026-04-05 01:12:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:12.913033 | orchestrator | 2026-04-05 
01:12:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:12.914591 | orchestrator | 2026-04-05 01:12:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:12.914626 | orchestrator | 2026-04-05 01:12:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:15.963976 | orchestrator | 2026-04-05 01:12:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:15.965541 | orchestrator | 2026-04-05 01:12:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:15.965659 | orchestrator | 2026-04-05 01:12:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:19.009634 | orchestrator | 2026-04-05 01:12:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:19.012153 | orchestrator | 2026-04-05 01:12:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:19.012210 | orchestrator | 2026-04-05 01:12:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:22.060801 | orchestrator | 2026-04-05 01:12:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:22.062525 | orchestrator | 2026-04-05 01:12:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:22.062614 | orchestrator | 2026-04-05 01:12:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:25.100434 | orchestrator | 2026-04-05 01:12:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:25.100779 | orchestrator | 2026-04-05 01:12:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:25.100803 | orchestrator | 2026-04-05 01:12:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:28.152735 | orchestrator | 2026-04-05 01:12:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:12:28.155674 | orchestrator | 2026-04-05 01:12:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:28.155736 | orchestrator | 2026-04-05 01:12:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:31.208680 | orchestrator | 2026-04-05 01:12:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:31.210759 | orchestrator | 2026-04-05 01:12:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:31.210845 | orchestrator | 2026-04-05 01:12:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:34.263969 | orchestrator | 2026-04-05 01:12:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:34.265792 | orchestrator | 2026-04-05 01:12:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:34.265869 | orchestrator | 2026-04-05 01:12:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:37.320678 | orchestrator | 2026-04-05 01:12:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:37.323519 | orchestrator | 2026-04-05 01:12:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:37.323583 | orchestrator | 2026-04-05 01:12:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:40.371750 | orchestrator | 2026-04-05 01:12:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:40.373350 | orchestrator | 2026-04-05 01:12:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:40.373436 | orchestrator | 2026-04-05 01:12:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:43.424626 | orchestrator | 2026-04-05 01:12:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:43.425634 | orchestrator | 2026-04-05 01:12:43 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:43.425684 | orchestrator | 2026-04-05 01:12:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:46.473490 | orchestrator | 2026-04-05 01:12:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:46.474975 | orchestrator | 2026-04-05 01:12:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:46.475014 | orchestrator | 2026-04-05 01:12:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:49.528099 | orchestrator | 2026-04-05 01:12:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:49.529549 | orchestrator | 2026-04-05 01:12:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:49.529708 | orchestrator | 2026-04-05 01:12:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:52.572883 | orchestrator | 2026-04-05 01:12:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:52.574253 | orchestrator | 2026-04-05 01:12:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:52.574322 | orchestrator | 2026-04-05 01:12:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:55.624549 | orchestrator | 2026-04-05 01:12:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:55.628130 | orchestrator | 2026-04-05 01:12:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:12:55.628225 | orchestrator | 2026-04-05 01:12:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:12:58.685516 | orchestrator | 2026-04-05 01:12:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:12:58.688135 | orchestrator | 2026-04-05 01:12:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:12:58.688206 | orchestrator | 2026-04-05 01:12:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:01.740759 | orchestrator | 2026-04-05 01:13:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:01.742385 | orchestrator | 2026-04-05 01:13:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:01.742434 | orchestrator | 2026-04-05 01:13:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:04.792558 | orchestrator | 2026-04-05 01:13:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:04.794894 | orchestrator | 2026-04-05 01:13:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:04.794944 | orchestrator | 2026-04-05 01:13:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:07.846603 | orchestrator | 2026-04-05 01:13:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:07.848188 | orchestrator | 2026-04-05 01:13:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:07.848321 | orchestrator | 2026-04-05 01:13:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:10.904032 | orchestrator | 2026-04-05 01:13:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:10.906924 | orchestrator | 2026-04-05 01:13:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:10.907043 | orchestrator | 2026-04-05 01:13:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:13.960820 | orchestrator | 2026-04-05 01:13:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:13.965089 | orchestrator | 2026-04-05 01:13:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:13.965135 | orchestrator | 2026-04-05 01:13:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:13:17.020661 | orchestrator | 2026-04-05 01:13:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:17.022741 | orchestrator | 2026-04-05 01:13:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:17.022784 | orchestrator | 2026-04-05 01:13:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:20.078702 | orchestrator | 2026-04-05 01:13:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:20.080883 | orchestrator | 2026-04-05 01:13:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:20.080948 | orchestrator | 2026-04-05 01:13:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:23.128207 | orchestrator | 2026-04-05 01:13:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:23.133198 | orchestrator | 2026-04-05 01:13:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:23.133359 | orchestrator | 2026-04-05 01:13:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:26.187776 | orchestrator | 2026-04-05 01:13:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:26.190325 | orchestrator | 2026-04-05 01:13:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:26.190362 | orchestrator | 2026-04-05 01:13:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:29.239458 | orchestrator | 2026-04-05 01:13:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:29.242897 | orchestrator | 2026-04-05 01:13:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:29.243013 | orchestrator | 2026-04-05 01:13:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:32.301659 | orchestrator | 2026-04-05 
01:13:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:32.304109 | orchestrator | 2026-04-05 01:13:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:32.304326 | orchestrator | 2026-04-05 01:13:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:35.358989 | orchestrator | 2026-04-05 01:13:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:35.360403 | orchestrator | 2026-04-05 01:13:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:35.360557 | orchestrator | 2026-04-05 01:13:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:38.413756 | orchestrator | 2026-04-05 01:13:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:38.416866 | orchestrator | 2026-04-05 01:13:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:38.416961 | orchestrator | 2026-04-05 01:13:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:41.462639 | orchestrator | 2026-04-05 01:13:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:41.464235 | orchestrator | 2026-04-05 01:13:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:41.464335 | orchestrator | 2026-04-05 01:13:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:44.523677 | orchestrator | 2026-04-05 01:13:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:13:44.525668 | orchestrator | 2026-04-05 01:13:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:13:44.525710 | orchestrator | 2026-04-05 01:13:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:47.572770 | orchestrator | 2026-04-05 01:13:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED
2026-04-05 01:13:47.574767 | orchestrator | 2026-04-05 01:13:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:13:47.574830 | orchestrator | 2026-04-05 01:13:47 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:13:50.625148 | orchestrator | 2026-04-05 01:13:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:13:50.628931 | orchestrator | 2026-04-05 01:13:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:13:50.629023 | orchestrator | 2026-04-05 01:13:50 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:13:53 to 01:19:01; tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 remained in state STARTED throughout ...]
2026-04-05 01:19:04.941418 | orchestrator | 2026-04-05 01:19:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state
STARTED 2026-04-05 01:19:04.943547 | orchestrator | 2026-04-05 01:19:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:04.943642 | orchestrator | 2026-04-05 01:19:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:07.996650 | orchestrator | 2026-04-05 01:19:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:07.998120 | orchestrator | 2026-04-05 01:19:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:07.998203 | orchestrator | 2026-04-05 01:19:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:11.053033 | orchestrator | 2026-04-05 01:19:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:11.055043 | orchestrator | 2026-04-05 01:19:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:11.055099 | orchestrator | 2026-04-05 01:19:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:14.109285 | orchestrator | 2026-04-05 01:19:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:14.110868 | orchestrator | 2026-04-05 01:19:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:14.110965 | orchestrator | 2026-04-05 01:19:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:17.169748 | orchestrator | 2026-04-05 01:19:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:17.171092 | orchestrator | 2026-04-05 01:19:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:17.171168 | orchestrator | 2026-04-05 01:19:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:20.225954 | orchestrator | 2026-04-05 01:19:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:20.228065 | orchestrator | 2026-04-05 01:19:20 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:20.228147 | orchestrator | 2026-04-05 01:19:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:23.281863 | orchestrator | 2026-04-05 01:19:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:23.285181 | orchestrator | 2026-04-05 01:19:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:23.285256 | orchestrator | 2026-04-05 01:19:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:26.334287 | orchestrator | 2026-04-05 01:19:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:26.335756 | orchestrator | 2026-04-05 01:19:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:26.335852 | orchestrator | 2026-04-05 01:19:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:29.383129 | orchestrator | 2026-04-05 01:19:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:29.384099 | orchestrator | 2026-04-05 01:19:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:29.384206 | orchestrator | 2026-04-05 01:19:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:32.428454 | orchestrator | 2026-04-05 01:19:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:32.430876 | orchestrator | 2026-04-05 01:19:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:32.430957 | orchestrator | 2026-04-05 01:19:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:35.483430 | orchestrator | 2026-04-05 01:19:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:35.483965 | orchestrator | 2026-04-05 01:19:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:19:35.484131 | orchestrator | 2026-04-05 01:19:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:38.539775 | orchestrator | 2026-04-05 01:19:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:38.541242 | orchestrator | 2026-04-05 01:19:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:38.541295 | orchestrator | 2026-04-05 01:19:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:41.591902 | orchestrator | 2026-04-05 01:19:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:41.594293 | orchestrator | 2026-04-05 01:19:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:41.594423 | orchestrator | 2026-04-05 01:19:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:44.647559 | orchestrator | 2026-04-05 01:19:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:44.650647 | orchestrator | 2026-04-05 01:19:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:44.650745 | orchestrator | 2026-04-05 01:19:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:47.704988 | orchestrator | 2026-04-05 01:19:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:47.707984 | orchestrator | 2026-04-05 01:19:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:47.708030 | orchestrator | 2026-04-05 01:19:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:50.761545 | orchestrator | 2026-04-05 01:19:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:50.763767 | orchestrator | 2026-04-05 01:19:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:50.763875 | orchestrator | 2026-04-05 01:19:50 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:19:53.812876 | orchestrator | 2026-04-05 01:19:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:53.814427 | orchestrator | 2026-04-05 01:19:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:53.814466 | orchestrator | 2026-04-05 01:19:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:56.862374 | orchestrator | 2026-04-05 01:19:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:56.863763 | orchestrator | 2026-04-05 01:19:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:56.863909 | orchestrator | 2026-04-05 01:19:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:19:59.916186 | orchestrator | 2026-04-05 01:19:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:19:59.917872 | orchestrator | 2026-04-05 01:19:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:19:59.918005 | orchestrator | 2026-04-05 01:19:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:02.958487 | orchestrator | 2026-04-05 01:20:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:02.959407 | orchestrator | 2026-04-05 01:20:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:02.959448 | orchestrator | 2026-04-05 01:20:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:06.016787 | orchestrator | 2026-04-05 01:20:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:06.018903 | orchestrator | 2026-04-05 01:20:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:06.019616 | orchestrator | 2026-04-05 01:20:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:09.070994 | orchestrator | 2026-04-05 
01:20:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:09.073253 | orchestrator | 2026-04-05 01:20:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:09.073334 | orchestrator | 2026-04-05 01:20:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:12.127381 | orchestrator | 2026-04-05 01:20:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:12.129890 | orchestrator | 2026-04-05 01:20:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:12.129931 | orchestrator | 2026-04-05 01:20:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:15.174256 | orchestrator | 2026-04-05 01:20:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:15.176620 | orchestrator | 2026-04-05 01:20:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:15.176668 | orchestrator | 2026-04-05 01:20:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:18.230188 | orchestrator | 2026-04-05 01:20:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:18.231561 | orchestrator | 2026-04-05 01:20:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:18.231622 | orchestrator | 2026-04-05 01:20:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:21.286795 | orchestrator | 2026-04-05 01:20:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:21.287333 | orchestrator | 2026-04-05 01:20:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:21.287361 | orchestrator | 2026-04-05 01:20:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:24.343172 | orchestrator | 2026-04-05 01:20:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:20:24.344496 | orchestrator | 2026-04-05 01:20:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:24.344551 | orchestrator | 2026-04-05 01:20:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:27.395348 | orchestrator | 2026-04-05 01:20:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:27.396098 | orchestrator | 2026-04-05 01:20:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:27.396129 | orchestrator | 2026-04-05 01:20:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:30.450398 | orchestrator | 2026-04-05 01:20:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:30.452068 | orchestrator | 2026-04-05 01:20:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:30.452126 | orchestrator | 2026-04-05 01:20:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:33.503770 | orchestrator | 2026-04-05 01:20:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:33.506177 | orchestrator | 2026-04-05 01:20:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:33.506264 | orchestrator | 2026-04-05 01:20:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:36.551898 | orchestrator | 2026-04-05 01:20:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:36.552676 | orchestrator | 2026-04-05 01:20:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:36.552710 | orchestrator | 2026-04-05 01:20:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:39.611322 | orchestrator | 2026-04-05 01:20:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:39.612959 | orchestrator | 2026-04-05 01:20:39 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:39.613032 | orchestrator | 2026-04-05 01:20:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:42.659616 | orchestrator | 2026-04-05 01:20:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:42.660585 | orchestrator | 2026-04-05 01:20:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:42.660627 | orchestrator | 2026-04-05 01:20:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:45.710215 | orchestrator | 2026-04-05 01:20:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:45.712548 | orchestrator | 2026-04-05 01:20:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:45.712599 | orchestrator | 2026-04-05 01:20:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:48.757617 | orchestrator | 2026-04-05 01:20:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:48.759370 | orchestrator | 2026-04-05 01:20:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:48.759431 | orchestrator | 2026-04-05 01:20:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:51.819108 | orchestrator | 2026-04-05 01:20:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:51.820680 | orchestrator | 2026-04-05 01:20:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:51.820938 | orchestrator | 2026-04-05 01:20:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:54.875161 | orchestrator | 2026-04-05 01:20:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:54.876292 | orchestrator | 2026-04-05 01:20:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:20:54.876338 | orchestrator | 2026-04-05 01:20:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:20:57.929017 | orchestrator | 2026-04-05 01:20:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:20:57.931456 | orchestrator | 2026-04-05 01:20:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:20:57.931536 | orchestrator | 2026-04-05 01:20:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:00.984945 | orchestrator | 2026-04-05 01:21:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:00.986473 | orchestrator | 2026-04-05 01:21:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:00.986516 | orchestrator | 2026-04-05 01:21:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:04.041937 | orchestrator | 2026-04-05 01:21:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:04.044158 | orchestrator | 2026-04-05 01:21:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:04.044240 | orchestrator | 2026-04-05 01:21:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:07.093463 | orchestrator | 2026-04-05 01:21:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:07.096122 | orchestrator | 2026-04-05 01:21:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:07.096287 | orchestrator | 2026-04-05 01:21:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:10.146109 | orchestrator | 2026-04-05 01:21:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:10.147375 | orchestrator | 2026-04-05 01:21:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:10.147409 | orchestrator | 2026-04-05 01:21:10 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:21:13.188078 | orchestrator | 2026-04-05 01:21:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:13.188978 | orchestrator | 2026-04-05 01:21:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:13.189020 | orchestrator | 2026-04-05 01:21:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:16.239496 | orchestrator | 2026-04-05 01:21:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:16.242313 | orchestrator | 2026-04-05 01:21:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:16.243589 | orchestrator | 2026-04-05 01:21:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:19.293933 | orchestrator | 2026-04-05 01:21:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:19.296326 | orchestrator | 2026-04-05 01:21:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:19.296378 | orchestrator | 2026-04-05 01:21:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:22.353224 | orchestrator | 2026-04-05 01:21:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:22.354925 | orchestrator | 2026-04-05 01:21:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:22.355000 | orchestrator | 2026-04-05 01:21:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:25.403225 | orchestrator | 2026-04-05 01:21:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:25.403349 | orchestrator | 2026-04-05 01:21:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:25.403367 | orchestrator | 2026-04-05 01:21:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:28.451793 | orchestrator | 2026-04-05 
01:21:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:28.453888 | orchestrator | 2026-04-05 01:21:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:28.453960 | orchestrator | 2026-04-05 01:21:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:31.498701 | orchestrator | 2026-04-05 01:21:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:31.500989 | orchestrator | 2026-04-05 01:21:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:31.501037 | orchestrator | 2026-04-05 01:21:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:34.548472 | orchestrator | 2026-04-05 01:21:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:34.548590 | orchestrator | 2026-04-05 01:21:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:34.548607 | orchestrator | 2026-04-05 01:21:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:37.605511 | orchestrator | 2026-04-05 01:21:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:37.607747 | orchestrator | 2026-04-05 01:21:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:37.607815 | orchestrator | 2026-04-05 01:21:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:40.656734 | orchestrator | 2026-04-05 01:21:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:40.658454 | orchestrator | 2026-04-05 01:21:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:40.658496 | orchestrator | 2026-04-05 01:21:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:43.713190 | orchestrator | 2026-04-05 01:21:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:21:43.715452 | orchestrator | 2026-04-05 01:21:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:43.715554 | orchestrator | 2026-04-05 01:21:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:46.768312 | orchestrator | 2026-04-05 01:21:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:46.769767 | orchestrator | 2026-04-05 01:21:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:46.769814 | orchestrator | 2026-04-05 01:21:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:49.818766 | orchestrator | 2026-04-05 01:21:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:49.819090 | orchestrator | 2026-04-05 01:21:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:49.819545 | orchestrator | 2026-04-05 01:21:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:52.860450 | orchestrator | 2026-04-05 01:21:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:52.860554 | orchestrator | 2026-04-05 01:21:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:52.860569 | orchestrator | 2026-04-05 01:21:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:55.898425 | orchestrator | 2026-04-05 01:21:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:21:55.900966 | orchestrator | 2026-04-05 01:21:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:21:55.901032 | orchestrator | 2026-04-05 01:21:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:21:58.946405 | orchestrator | 2026-04-05 01:21:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:23:59.054425 | orchestrator | 2026-04-05 01:23:59 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:23:59.054538 | orchestrator | 2026-04-05 01:23:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:02.098256 | orchestrator | 2026-04-05 01:24:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:02.099199 | orchestrator | 2026-04-05 01:24:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:02.099267 | orchestrator | 2026-04-05 01:24:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:05.154984 | orchestrator | 2026-04-05 01:24:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:05.156625 | orchestrator | 2026-04-05 01:24:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:05.156777 | orchestrator | 2026-04-05 01:24:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:08.206666 | orchestrator | 2026-04-05 01:24:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:08.209634 | orchestrator | 2026-04-05 01:24:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:08.209697 | orchestrator | 2026-04-05 01:24:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:11.259803 | orchestrator | 2026-04-05 01:24:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:11.261487 | orchestrator | 2026-04-05 01:24:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:11.261532 | orchestrator | 2026-04-05 01:24:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:14.316851 | orchestrator | 2026-04-05 01:24:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:14.318904 | orchestrator | 2026-04-05 01:24:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:24:14.318984 | orchestrator | 2026-04-05 01:24:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:17.365005 | orchestrator | 2026-04-05 01:24:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:17.366618 | orchestrator | 2026-04-05 01:24:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:17.366653 | orchestrator | 2026-04-05 01:24:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:20.417713 | orchestrator | 2026-04-05 01:24:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:20.419248 | orchestrator | 2026-04-05 01:24:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:20.419351 | orchestrator | 2026-04-05 01:24:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:23.469274 | orchestrator | 2026-04-05 01:24:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:23.471047 | orchestrator | 2026-04-05 01:24:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:23.471117 | orchestrator | 2026-04-05 01:24:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:26.520364 | orchestrator | 2026-04-05 01:24:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:26.521711 | orchestrator | 2026-04-05 01:24:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:26.521738 | orchestrator | 2026-04-05 01:24:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:24:29.570778 | orchestrator | 2026-04-05 01:24:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:29.571702 | orchestrator | 2026-04-05 01:24:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:29.571742 | orchestrator | 2026-04-05 01:24:29 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:24:32.617352 | orchestrator | 2026-04-05 01:24:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:24:32.619517 | orchestrator | 2026-04-05 01:24:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:24:32.619535 | orchestrator | 2026-04-05 01:24:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:29:46.803062 | orchestrator | 2026-04-05 01:29:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:29:46.804308 | orchestrator | 2026-04-05 01:29:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:29:46.804549 | orchestrator | 2026-04-05 01:29:46 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:29:49.850986 | orchestrator | 2026-04-05 01:29:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:29:49.855018 | orchestrator | 2026-04-05 01:29:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:29:49.855085 | orchestrator | 2026-04-05 01:29:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:29:52.908912 | orchestrator | 2026-04-05 01:29:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:29:52.913648 | orchestrator | 2026-04-05 01:29:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:29:52.913734 | orchestrator | 2026-04-05 01:29:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:29:55.965620 | orchestrator | 2026-04-05 01:29:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:29:55.969400 | orchestrator | 2026-04-05 01:29:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:29:55.969487 | orchestrator | 2026-04-05 01:29:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:29:59.017348 | orchestrator | 2026-04-05 01:29:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:29:59.020368 | orchestrator | 2026-04-05 01:29:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:29:59.020451 | orchestrator | 2026-04-05 01:29:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:02.074383 | orchestrator | 2026-04-05 01:30:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:02.077285 | orchestrator | 2026-04-05 01:30:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:02.077369 | orchestrator | 2026-04-05 01:30:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:05.139783 | orchestrator | 2026-04-05 
01:30:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:05.143755 | orchestrator | 2026-04-05 01:30:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:05.143827 | orchestrator | 2026-04-05 01:30:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:08.199762 | orchestrator | 2026-04-05 01:30:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:08.201697 | orchestrator | 2026-04-05 01:30:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:08.201904 | orchestrator | 2026-04-05 01:30:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:11.246662 | orchestrator | 2026-04-05 01:30:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:11.248418 | orchestrator | 2026-04-05 01:30:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:11.248490 | orchestrator | 2026-04-05 01:30:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:14.300783 | orchestrator | 2026-04-05 01:30:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:14.302521 | orchestrator | 2026-04-05 01:30:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:14.302567 | orchestrator | 2026-04-05 01:30:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:17.347192 | orchestrator | 2026-04-05 01:30:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:17.348869 | orchestrator | 2026-04-05 01:30:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:17.348908 | orchestrator | 2026-04-05 01:30:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:20.395744 | orchestrator | 2026-04-05 01:30:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:30:20.397786 | orchestrator | 2026-04-05 01:30:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:20.397832 | orchestrator | 2026-04-05 01:30:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:23.443964 | orchestrator | 2026-04-05 01:30:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:23.445900 | orchestrator | 2026-04-05 01:30:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:23.445960 | orchestrator | 2026-04-05 01:30:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:26.500868 | orchestrator | 2026-04-05 01:30:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:26.502189 | orchestrator | 2026-04-05 01:30:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:26.502284 | orchestrator | 2026-04-05 01:30:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:29.551057 | orchestrator | 2026-04-05 01:30:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:29.551851 | orchestrator | 2026-04-05 01:30:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:29.552109 | orchestrator | 2026-04-05 01:30:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:32.593839 | orchestrator | 2026-04-05 01:30:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:32.595855 | orchestrator | 2026-04-05 01:30:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:32.595916 | orchestrator | 2026-04-05 01:30:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:35.644132 | orchestrator | 2026-04-05 01:30:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:35.645523 | orchestrator | 2026-04-05 01:30:35 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:35.645573 | orchestrator | 2026-04-05 01:30:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:38.697925 | orchestrator | 2026-04-05 01:30:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:38.699391 | orchestrator | 2026-04-05 01:30:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:38.699430 | orchestrator | 2026-04-05 01:30:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:41.750465 | orchestrator | 2026-04-05 01:30:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:41.754314 | orchestrator | 2026-04-05 01:30:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:41.754450 | orchestrator | 2026-04-05 01:30:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:44.799267 | orchestrator | 2026-04-05 01:30:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:44.799872 | orchestrator | 2026-04-05 01:30:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:44.799890 | orchestrator | 2026-04-05 01:30:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:47.849944 | orchestrator | 2026-04-05 01:30:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:47.851623 | orchestrator | 2026-04-05 01:30:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:47.851713 | orchestrator | 2026-04-05 01:30:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:50.907338 | orchestrator | 2026-04-05 01:30:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:50.908944 | orchestrator | 2026-04-05 01:30:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:30:50.909015 | orchestrator | 2026-04-05 01:30:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:53.957123 | orchestrator | 2026-04-05 01:30:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:53.959048 | orchestrator | 2026-04-05 01:30:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:53.959087 | orchestrator | 2026-04-05 01:30:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:30:57.008302 | orchestrator | 2026-04-05 01:30:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:30:57.008762 | orchestrator | 2026-04-05 01:30:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:30:57.008792 | orchestrator | 2026-04-05 01:30:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:00.057054 | orchestrator | 2026-04-05 01:31:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:00.058483 | orchestrator | 2026-04-05 01:31:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:00.058593 | orchestrator | 2026-04-05 01:31:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:03.108858 | orchestrator | 2026-04-05 01:31:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:03.111152 | orchestrator | 2026-04-05 01:31:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:03.111214 | orchestrator | 2026-04-05 01:31:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:06.160695 | orchestrator | 2026-04-05 01:31:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:06.161376 | orchestrator | 2026-04-05 01:31:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:06.161439 | orchestrator | 2026-04-05 01:31:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:31:09.214841 | orchestrator | 2026-04-05 01:31:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:09.217929 | orchestrator | 2026-04-05 01:31:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:09.218005 | orchestrator | 2026-04-05 01:31:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:12.263968 | orchestrator | 2026-04-05 01:31:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:12.264635 | orchestrator | 2026-04-05 01:31:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:12.264671 | orchestrator | 2026-04-05 01:31:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:15.313382 | orchestrator | 2026-04-05 01:31:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:15.313911 | orchestrator | 2026-04-05 01:31:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:15.313986 | orchestrator | 2026-04-05 01:31:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:18.362996 | orchestrator | 2026-04-05 01:31:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:18.365677 | orchestrator | 2026-04-05 01:31:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:18.365731 | orchestrator | 2026-04-05 01:31:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:21.411395 | orchestrator | 2026-04-05 01:31:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:21.413140 | orchestrator | 2026-04-05 01:31:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:21.413411 | orchestrator | 2026-04-05 01:31:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:24.463806 | orchestrator | 2026-04-05 
01:31:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:24.465325 | orchestrator | 2026-04-05 01:31:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:24.465391 | orchestrator | 2026-04-05 01:31:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:27.517744 | orchestrator | 2026-04-05 01:31:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:27.520309 | orchestrator | 2026-04-05 01:31:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:27.520378 | orchestrator | 2026-04-05 01:31:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:30.571011 | orchestrator | 2026-04-05 01:31:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:30.572618 | orchestrator | 2026-04-05 01:31:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:30.572666 | orchestrator | 2026-04-05 01:31:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:33.627506 | orchestrator | 2026-04-05 01:31:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:33.629142 | orchestrator | 2026-04-05 01:31:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:33.629232 | orchestrator | 2026-04-05 01:31:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:36.673807 | orchestrator | 2026-04-05 01:31:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:36.674610 | orchestrator | 2026-04-05 01:31:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:36.674680 | orchestrator | 2026-04-05 01:31:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:39.728248 | orchestrator | 2026-04-05 01:31:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:31:39.728670 | orchestrator | 2026-04-05 01:31:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:39.728748 | orchestrator | 2026-04-05 01:31:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:42.780053 | orchestrator | 2026-04-05 01:31:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:42.781020 | orchestrator | 2026-04-05 01:31:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:42.781054 | orchestrator | 2026-04-05 01:31:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:45.831664 | orchestrator | 2026-04-05 01:31:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:45.833214 | orchestrator | 2026-04-05 01:31:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:45.833336 | orchestrator | 2026-04-05 01:31:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:48.864602 | orchestrator | 2026-04-05 01:31:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:48.867048 | orchestrator | 2026-04-05 01:31:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:48.867109 | orchestrator | 2026-04-05 01:31:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:51.915670 | orchestrator | 2026-04-05 01:31:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:51.917051 | orchestrator | 2026-04-05 01:31:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:51.917106 | orchestrator | 2026-04-05 01:31:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:54.973988 | orchestrator | 2026-04-05 01:31:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:54.975918 | orchestrator | 2026-04-05 01:31:54 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:54.975999 | orchestrator | 2026-04-05 01:31:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:31:58.037539 | orchestrator | 2026-04-05 01:31:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:31:58.039605 | orchestrator | 2026-04-05 01:31:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:31:58.039666 | orchestrator | 2026-04-05 01:31:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:01.083429 | orchestrator | 2026-04-05 01:32:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:01.087761 | orchestrator | 2026-04-05 01:32:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:01.087849 | orchestrator | 2026-04-05 01:32:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:04.140417 | orchestrator | 2026-04-05 01:32:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:04.144184 | orchestrator | 2026-04-05 01:32:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:04.145241 | orchestrator | 2026-04-05 01:32:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:07.197534 | orchestrator | 2026-04-05 01:32:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:07.201522 | orchestrator | 2026-04-05 01:32:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:07.201584 | orchestrator | 2026-04-05 01:32:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:10.248317 | orchestrator | 2026-04-05 01:32:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:10.249651 | orchestrator | 2026-04-05 01:32:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:32:10.249697 | orchestrator | 2026-04-05 01:32:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:13.298656 | orchestrator | 2026-04-05 01:32:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:13.300446 | orchestrator | 2026-04-05 01:32:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:13.300489 | orchestrator | 2026-04-05 01:32:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:16.358690 | orchestrator | 2026-04-05 01:32:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:16.360624 | orchestrator | 2026-04-05 01:32:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:16.360669 | orchestrator | 2026-04-05 01:32:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:19.407247 | orchestrator | 2026-04-05 01:32:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:19.409473 | orchestrator | 2026-04-05 01:32:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:19.409511 | orchestrator | 2026-04-05 01:32:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:22.467359 | orchestrator | 2026-04-05 01:32:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:22.469835 | orchestrator | 2026-04-05 01:32:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:22.469910 | orchestrator | 2026-04-05 01:32:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:25.516052 | orchestrator | 2026-04-05 01:32:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:25.518073 | orchestrator | 2026-04-05 01:32:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:25.518199 | orchestrator | 2026-04-05 01:32:25 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:32:28.574102 | orchestrator | 2026-04-05 01:32:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:28.575940 | orchestrator | 2026-04-05 01:32:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:28.576027 | orchestrator | 2026-04-05 01:32:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:31.624369 | orchestrator | 2026-04-05 01:32:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:31.627349 | orchestrator | 2026-04-05 01:32:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:31.627387 | orchestrator | 2026-04-05 01:32:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:34.672336 | orchestrator | 2026-04-05 01:32:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:34.674463 | orchestrator | 2026-04-05 01:32:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:34.674546 | orchestrator | 2026-04-05 01:32:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:37.732875 | orchestrator | 2026-04-05 01:32:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:37.734970 | orchestrator | 2026-04-05 01:32:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:37.735033 | orchestrator | 2026-04-05 01:32:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:40.782184 | orchestrator | 2026-04-05 01:32:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:40.784159 | orchestrator | 2026-04-05 01:32:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:40.784348 | orchestrator | 2026-04-05 01:32:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:43.835759 | orchestrator | 2026-04-05 
01:32:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:43.838183 | orchestrator | 2026-04-05 01:32:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:43.838243 | orchestrator | 2026-04-05 01:32:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:46.882663 | orchestrator | 2026-04-05 01:32:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:46.884526 | orchestrator | 2026-04-05 01:32:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:46.884588 | orchestrator | 2026-04-05 01:32:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:49.927104 | orchestrator | 2026-04-05 01:32:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:49.927828 | orchestrator | 2026-04-05 01:32:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:49.927859 | orchestrator | 2026-04-05 01:32:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:52.973613 | orchestrator | 2026-04-05 01:32:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:52.974863 | orchestrator | 2026-04-05 01:32:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:52.974970 | orchestrator | 2026-04-05 01:32:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:56.029285 | orchestrator | 2026-04-05 01:32:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:32:56.032591 | orchestrator | 2026-04-05 01:32:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:56.032676 | orchestrator | 2026-04-05 01:32:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:32:59.074548 | orchestrator | 2026-04-05 01:32:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:32:59.076059 | orchestrator | 2026-04-05 01:32:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:32:59.076298 | orchestrator | 2026-04-05 01:32:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:02.130092 | orchestrator | 2026-04-05 01:33:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:02.130935 | orchestrator | 2026-04-05 01:33:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:02.131029 | orchestrator | 2026-04-05 01:33:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:05.182768 | orchestrator | 2026-04-05 01:33:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:05.183900 | orchestrator | 2026-04-05 01:33:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:05.183949 | orchestrator | 2026-04-05 01:33:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:08.228214 | orchestrator | 2026-04-05 01:33:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:08.230564 | orchestrator | 2026-04-05 01:33:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:08.230687 | orchestrator | 2026-04-05 01:33:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:11.277864 | orchestrator | 2026-04-05 01:33:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:11.279002 | orchestrator | 2026-04-05 01:33:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:11.279053 | orchestrator | 2026-04-05 01:33:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:14.332441 | orchestrator | 2026-04-05 01:33:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:14.336280 | orchestrator | 2026-04-05 01:33:14 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:14.336432 | orchestrator | 2026-04-05 01:33:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:17.384133 | orchestrator | 2026-04-05 01:33:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:17.386709 | orchestrator | 2026-04-05 01:33:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:17.386798 | orchestrator | 2026-04-05 01:33:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:20.429539 | orchestrator | 2026-04-05 01:33:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:20.429643 | orchestrator | 2026-04-05 01:33:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:20.429659 | orchestrator | 2026-04-05 01:33:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:23.485454 | orchestrator | 2026-04-05 01:33:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:23.487526 | orchestrator | 2026-04-05 01:33:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:23.487603 | orchestrator | 2026-04-05 01:33:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:26.537032 | orchestrator | 2026-04-05 01:33:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:26.538223 | orchestrator | 2026-04-05 01:33:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:33:26.538251 | orchestrator | 2026-04-05 01:33:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:33:29.587821 | orchestrator | 2026-04-05 01:33:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:33:29.589067 | orchestrator | 2026-04-05 01:33:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:33:29.589101 | orchestrator | 2026-04-05 01:33:29 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:33:32.632136 | orchestrator | 2026-04-05 01:33:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:33:32.632924 | orchestrator | 2026-04-05 01:33:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:33:32.632958 | orchestrator | 2026-04-05 01:33:32 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:33:35.677434 | orchestrator | 2026-04-05 01:33:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:33:35.679031 | orchestrator | 2026-04-05 01:33:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:33:35.679078 | orchestrator | 2026-04-05 01:33:35 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:39:02.249137 | orchestrator | 2026-04-05 01:39:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:39:02.250519 | orchestrator | 2026-04-05 01:39:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:39:02.251124 | orchestrator | 2026-04-05 01:39:02 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:39:05.297508 | orchestrator | 2026-04-05 01:39:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:05.298240 | orchestrator | 2026-04-05 01:39:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:05.298553 | orchestrator | 2026-04-05 01:39:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:08.342168 | orchestrator | 2026-04-05 01:39:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:08.343094 | orchestrator | 2026-04-05 01:39:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:08.343267 | orchestrator | 2026-04-05 01:39:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:11.387718 | orchestrator | 2026-04-05 01:39:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:11.390395 | orchestrator | 2026-04-05 01:39:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:11.390460 | orchestrator | 2026-04-05 01:39:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:14.437806 | orchestrator | 2026-04-05 01:39:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:14.441165 | orchestrator | 2026-04-05 01:39:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:14.441242 | orchestrator | 2026-04-05 01:39:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:17.495046 | orchestrator | 2026-04-05 01:39:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:17.497163 | orchestrator | 2026-04-05 01:39:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:17.497373 | orchestrator | 2026-04-05 01:39:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:20.549693 | orchestrator | 2026-04-05 
01:39:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:20.550635 | orchestrator | 2026-04-05 01:39:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:20.550669 | orchestrator | 2026-04-05 01:39:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:23.597813 | orchestrator | 2026-04-05 01:39:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:23.599403 | orchestrator | 2026-04-05 01:39:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:23.599484 | orchestrator | 2026-04-05 01:39:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:26.646973 | orchestrator | 2026-04-05 01:39:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:26.649216 | orchestrator | 2026-04-05 01:39:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:26.649261 | orchestrator | 2026-04-05 01:39:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:29.700933 | orchestrator | 2026-04-05 01:39:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:29.702350 | orchestrator | 2026-04-05 01:39:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:29.702408 | orchestrator | 2026-04-05 01:39:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:32.753089 | orchestrator | 2026-04-05 01:39:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:32.754984 | orchestrator | 2026-04-05 01:39:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:32.755043 | orchestrator | 2026-04-05 01:39:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:35.805869 | orchestrator | 2026-04-05 01:39:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:39:35.808963 | orchestrator | 2026-04-05 01:39:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:35.809061 | orchestrator | 2026-04-05 01:39:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:38.852036 | orchestrator | 2026-04-05 01:39:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:38.853754 | orchestrator | 2026-04-05 01:39:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:38.853832 | orchestrator | 2026-04-05 01:39:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:41.899399 | orchestrator | 2026-04-05 01:39:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:41.902127 | orchestrator | 2026-04-05 01:39:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:41.902203 | orchestrator | 2026-04-05 01:39:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:44.951006 | orchestrator | 2026-04-05 01:39:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:44.953620 | orchestrator | 2026-04-05 01:39:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:44.953709 | orchestrator | 2026-04-05 01:39:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:48.009833 | orchestrator | 2026-04-05 01:39:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:48.011985 | orchestrator | 2026-04-05 01:39:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:48.012063 | orchestrator | 2026-04-05 01:39:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:51.059595 | orchestrator | 2026-04-05 01:39:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:51.061133 | orchestrator | 2026-04-05 01:39:51 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:51.061169 | orchestrator | 2026-04-05 01:39:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:54.118687 | orchestrator | 2026-04-05 01:39:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:54.122795 | orchestrator | 2026-04-05 01:39:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:54.122877 | orchestrator | 2026-04-05 01:39:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:39:57.170093 | orchestrator | 2026-04-05 01:39:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:39:57.172374 | orchestrator | 2026-04-05 01:39:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:39:57.172429 | orchestrator | 2026-04-05 01:39:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:00.221198 | orchestrator | 2026-04-05 01:40:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:00.221688 | orchestrator | 2026-04-05 01:40:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:00.221721 | orchestrator | 2026-04-05 01:40:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:03.273392 | orchestrator | 2026-04-05 01:40:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:03.275616 | orchestrator | 2026-04-05 01:40:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:03.275670 | orchestrator | 2026-04-05 01:40:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:06.323729 | orchestrator | 2026-04-05 01:40:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:06.325466 | orchestrator | 2026-04-05 01:40:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:40:06.325526 | orchestrator | 2026-04-05 01:40:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:09.371111 | orchestrator | 2026-04-05 01:40:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:09.372732 | orchestrator | 2026-04-05 01:40:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:09.372762 | orchestrator | 2026-04-05 01:40:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:12.419303 | orchestrator | 2026-04-05 01:40:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:12.422143 | orchestrator | 2026-04-05 01:40:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:12.422206 | orchestrator | 2026-04-05 01:40:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:15.471600 | orchestrator | 2026-04-05 01:40:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:15.474950 | orchestrator | 2026-04-05 01:40:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:15.475050 | orchestrator | 2026-04-05 01:40:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:18.526189 | orchestrator | 2026-04-05 01:40:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:18.528835 | orchestrator | 2026-04-05 01:40:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:18.528900 | orchestrator | 2026-04-05 01:40:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:21.583556 | orchestrator | 2026-04-05 01:40:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:21.586689 | orchestrator | 2026-04-05 01:40:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:21.586790 | orchestrator | 2026-04-05 01:40:21 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:40:24.635906 | orchestrator | 2026-04-05 01:40:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:24.638310 | orchestrator | 2026-04-05 01:40:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:24.638387 | orchestrator | 2026-04-05 01:40:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:27.685968 | orchestrator | 2026-04-05 01:40:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:27.688000 | orchestrator | 2026-04-05 01:40:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:27.688060 | orchestrator | 2026-04-05 01:40:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:30.736478 | orchestrator | 2026-04-05 01:40:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:30.738628 | orchestrator | 2026-04-05 01:40:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:30.738690 | orchestrator | 2026-04-05 01:40:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:33.790724 | orchestrator | 2026-04-05 01:40:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:33.792566 | orchestrator | 2026-04-05 01:40:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:33.792612 | orchestrator | 2026-04-05 01:40:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:36.840460 | orchestrator | 2026-04-05 01:40:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:36.841802 | orchestrator | 2026-04-05 01:40:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:36.841857 | orchestrator | 2026-04-05 01:40:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:39.890951 | orchestrator | 2026-04-05 
01:40:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:39.892764 | orchestrator | 2026-04-05 01:40:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:39.892846 | orchestrator | 2026-04-05 01:40:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:42.935608 | orchestrator | 2026-04-05 01:40:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:42.938389 | orchestrator | 2026-04-05 01:40:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:42.938761 | orchestrator | 2026-04-05 01:40:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:45.989420 | orchestrator | 2026-04-05 01:40:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:45.991619 | orchestrator | 2026-04-05 01:40:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:45.991669 | orchestrator | 2026-04-05 01:40:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:49.031753 | orchestrator | 2026-04-05 01:40:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:49.032714 | orchestrator | 2026-04-05 01:40:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:49.032778 | orchestrator | 2026-04-05 01:40:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:52.082920 | orchestrator | 2026-04-05 01:40:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:52.089833 | orchestrator | 2026-04-05 01:40:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:52.089931 | orchestrator | 2026-04-05 01:40:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:55.140882 | orchestrator | 2026-04-05 01:40:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:40:55.143269 | orchestrator | 2026-04-05 01:40:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:55.143361 | orchestrator | 2026-04-05 01:40:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:40:58.197691 | orchestrator | 2026-04-05 01:40:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:40:58.200260 | orchestrator | 2026-04-05 01:40:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:40:58.200535 | orchestrator | 2026-04-05 01:40:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:01.248761 | orchestrator | 2026-04-05 01:41:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:01.250545 | orchestrator | 2026-04-05 01:41:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:01.250605 | orchestrator | 2026-04-05 01:41:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:04.302942 | orchestrator | 2026-04-05 01:41:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:04.305009 | orchestrator | 2026-04-05 01:41:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:04.305066 | orchestrator | 2026-04-05 01:41:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:07.355109 | orchestrator | 2026-04-05 01:41:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:07.356910 | orchestrator | 2026-04-05 01:41:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:07.356956 | orchestrator | 2026-04-05 01:41:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:10.408969 | orchestrator | 2026-04-05 01:41:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:10.411301 | orchestrator | 2026-04-05 01:41:10 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:10.411388 | orchestrator | 2026-04-05 01:41:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:13.457249 | orchestrator | 2026-04-05 01:41:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:13.459012 | orchestrator | 2026-04-05 01:41:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:13.459381 | orchestrator | 2026-04-05 01:41:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:16.511701 | orchestrator | 2026-04-05 01:41:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:16.514118 | orchestrator | 2026-04-05 01:41:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:16.514504 | orchestrator | 2026-04-05 01:41:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:19.558921 | orchestrator | 2026-04-05 01:41:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:19.560185 | orchestrator | 2026-04-05 01:41:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:19.560222 | orchestrator | 2026-04-05 01:41:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:22.613830 | orchestrator | 2026-04-05 01:41:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:22.615218 | orchestrator | 2026-04-05 01:41:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:22.615366 | orchestrator | 2026-04-05 01:41:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:25.663900 | orchestrator | 2026-04-05 01:41:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:25.665934 | orchestrator | 2026-04-05 01:41:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:41:25.665990 | orchestrator | 2026-04-05 01:41:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:28.712298 | orchestrator | 2026-04-05 01:41:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:28.714003 | orchestrator | 2026-04-05 01:41:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:28.714094 | orchestrator | 2026-04-05 01:41:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:31.756519 | orchestrator | 2026-04-05 01:41:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:31.757391 | orchestrator | 2026-04-05 01:41:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:31.757651 | orchestrator | 2026-04-05 01:41:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:34.802635 | orchestrator | 2026-04-05 01:41:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:34.805039 | orchestrator | 2026-04-05 01:41:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:34.805270 | orchestrator | 2026-04-05 01:41:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:37.862870 | orchestrator | 2026-04-05 01:41:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:37.863925 | orchestrator | 2026-04-05 01:41:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:37.864427 | orchestrator | 2026-04-05 01:41:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:40.915547 | orchestrator | 2026-04-05 01:41:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:40.917209 | orchestrator | 2026-04-05 01:41:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:40.917256 | orchestrator | 2026-04-05 01:41:40 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:41:43.972602 | orchestrator | 2026-04-05 01:41:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:43.973893 | orchestrator | 2026-04-05 01:41:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:43.973951 | orchestrator | 2026-04-05 01:41:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:47.027778 | orchestrator | 2026-04-05 01:41:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:47.030848 | orchestrator | 2026-04-05 01:41:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:47.030900 | orchestrator | 2026-04-05 01:41:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:50.080953 | orchestrator | 2026-04-05 01:41:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:50.082144 | orchestrator | 2026-04-05 01:41:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:50.082190 | orchestrator | 2026-04-05 01:41:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:53.143347 | orchestrator | 2026-04-05 01:41:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:53.147603 | orchestrator | 2026-04-05 01:41:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:53.147684 | orchestrator | 2026-04-05 01:41:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:56.202362 | orchestrator | 2026-04-05 01:41:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:56.204162 | orchestrator | 2026-04-05 01:41:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:56.204222 | orchestrator | 2026-04-05 01:41:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:41:59.252996 | orchestrator | 2026-04-05 
01:41:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:41:59.256023 | orchestrator | 2026-04-05 01:41:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:41:59.256134 | orchestrator | 2026-04-05 01:41:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:02.307022 | orchestrator | 2026-04-05 01:42:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:02.308466 | orchestrator | 2026-04-05 01:42:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:02.308534 | orchestrator | 2026-04-05 01:42:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:05.358765 | orchestrator | 2026-04-05 01:42:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:05.359393 | orchestrator | 2026-04-05 01:42:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:05.359427 | orchestrator | 2026-04-05 01:42:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:08.399512 | orchestrator | 2026-04-05 01:42:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:08.400673 | orchestrator | 2026-04-05 01:42:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:08.400883 | orchestrator | 2026-04-05 01:42:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:11.444549 | orchestrator | 2026-04-05 01:42:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:11.445798 | orchestrator | 2026-04-05 01:42:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:11.445980 | orchestrator | 2026-04-05 01:42:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:14.490926 | orchestrator | 2026-04-05 01:42:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:42:14.491800 | orchestrator | 2026-04-05 01:42:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:14.492090 | orchestrator | 2026-04-05 01:42:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:17.542719 | orchestrator | 2026-04-05 01:42:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:17.545385 | orchestrator | 2026-04-05 01:42:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:17.545544 | orchestrator | 2026-04-05 01:42:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:20.598591 | orchestrator | 2026-04-05 01:42:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:20.599870 | orchestrator | 2026-04-05 01:42:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:20.599930 | orchestrator | 2026-04-05 01:42:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:23.648970 | orchestrator | 2026-04-05 01:42:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:23.650290 | orchestrator | 2026-04-05 01:42:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:23.650645 | orchestrator | 2026-04-05 01:42:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:26.709928 | orchestrator | 2026-04-05 01:42:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:26.711227 | orchestrator | 2026-04-05 01:42:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:26.711287 | orchestrator | 2026-04-05 01:42:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:29.759074 | orchestrator | 2026-04-05 01:42:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:29.761190 | orchestrator | 2026-04-05 01:42:29 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:29.761271 | orchestrator | 2026-04-05 01:42:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:32.813165 | orchestrator | 2026-04-05 01:42:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:32.814913 | orchestrator | 2026-04-05 01:42:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:32.815158 | orchestrator | 2026-04-05 01:42:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:35.864604 | orchestrator | 2026-04-05 01:42:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:35.866429 | orchestrator | 2026-04-05 01:42:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:35.866494 | orchestrator | 2026-04-05 01:42:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:38.908723 | orchestrator | 2026-04-05 01:42:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:38.910919 | orchestrator | 2026-04-05 01:42:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:38.911104 | orchestrator | 2026-04-05 01:42:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:41.954207 | orchestrator | 2026-04-05 01:42:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:41.956933 | orchestrator | 2026-04-05 01:42:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:42:41.956975 | orchestrator | 2026-04-05 01:42:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:42:45.013457 | orchestrator | 2026-04-05 01:42:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:42:45.016503 | orchestrator | 2026-04-05 01:42:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:42:45.016598 | orchestrator | 2026-04-05 01:42:45 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:42:48.065261 | orchestrator | 2026-04-05 01:42:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:42:48.068833 | orchestrator | 2026-04-05 01:42:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:42:48.068915 | orchestrator | 2026-04-05 01:42:48 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:47:43.935456 | orchestrator | 2026-04-05 01:47:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:47:43.937444 | orchestrator | 2026-04-05 01:47:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:47:43.937540 | orchestrator | 2026-04-05 01:47:43 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:47:46.981361 | orchestrator | 2026-04-05 01:47:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:47:46.984677 | orchestrator | 2026-04-05 01:47:46 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:47:46.984752 | orchestrator | 2026-04-05 01:47:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:47:50.037023 | orchestrator | 2026-04-05 01:47:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:47:50.039582 | orchestrator | 2026-04-05 01:47:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:47:50.039670 | orchestrator | 2026-04-05 01:47:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:47:53.078510 | orchestrator | 2026-04-05 01:47:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:47:53.078810 | orchestrator | 2026-04-05 01:47:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:47:53.078839 | orchestrator | 2026-04-05 01:47:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:47:56.123702 | orchestrator | 2026-04-05 01:47:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:47:56.125954 | orchestrator | 2026-04-05 01:47:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:47:56.126072 | orchestrator | 2026-04-05 01:47:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:47:59.170445 | orchestrator | 2026-04-05 01:47:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:47:59.172816 | orchestrator | 2026-04-05 01:47:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:47:59.172897 | orchestrator | 2026-04-05 01:47:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:02.213062 | orchestrator | 2026-04-05 01:48:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:02.215417 | orchestrator | 2026-04-05 01:48:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:48:02.215459 | orchestrator | 2026-04-05 01:48:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:05.259342 | orchestrator | 2026-04-05 01:48:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:05.260556 | orchestrator | 2026-04-05 01:48:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:05.260592 | orchestrator | 2026-04-05 01:48:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:08.301242 | orchestrator | 2026-04-05 01:48:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:08.301950 | orchestrator | 2026-04-05 01:48:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:08.302368 | orchestrator | 2026-04-05 01:48:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:11.338229 | orchestrator | 2026-04-05 01:48:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:11.341461 | orchestrator | 2026-04-05 01:48:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:11.341551 | orchestrator | 2026-04-05 01:48:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:14.387730 | orchestrator | 2026-04-05 01:48:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:14.389198 | orchestrator | 2026-04-05 01:48:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:14.389248 | orchestrator | 2026-04-05 01:48:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:17.442219 | orchestrator | 2026-04-05 01:48:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:17.444094 | orchestrator | 2026-04-05 01:48:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:17.444131 | orchestrator | 2026-04-05 01:48:17 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:48:20.486958 | orchestrator | 2026-04-05 01:48:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:20.488787 | orchestrator | 2026-04-05 01:48:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:20.488842 | orchestrator | 2026-04-05 01:48:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:23.537498 | orchestrator | 2026-04-05 01:48:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:23.540471 | orchestrator | 2026-04-05 01:48:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:23.540539 | orchestrator | 2026-04-05 01:48:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:26.588628 | orchestrator | 2026-04-05 01:48:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:26.590388 | orchestrator | 2026-04-05 01:48:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:26.590468 | orchestrator | 2026-04-05 01:48:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:29.643928 | orchestrator | 2026-04-05 01:48:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:29.871832 | orchestrator | 2026-04-05 01:48:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:29.871943 | orchestrator | 2026-04-05 01:48:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:32.700012 | orchestrator | 2026-04-05 01:48:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:32.701322 | orchestrator | 2026-04-05 01:48:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:32.701782 | orchestrator | 2026-04-05 01:48:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:35.751839 | orchestrator | 2026-04-05 
01:48:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:35.753827 | orchestrator | 2026-04-05 01:48:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:35.754193 | orchestrator | 2026-04-05 01:48:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:38.804022 | orchestrator | 2026-04-05 01:48:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:38.805570 | orchestrator | 2026-04-05 01:48:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:38.805648 | orchestrator | 2026-04-05 01:48:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:41.854273 | orchestrator | 2026-04-05 01:48:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:41.856215 | orchestrator | 2026-04-05 01:48:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:41.856292 | orchestrator | 2026-04-05 01:48:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:44.909012 | orchestrator | 2026-04-05 01:48:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:44.910531 | orchestrator | 2026-04-05 01:48:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:44.910880 | orchestrator | 2026-04-05 01:48:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:47.963929 | orchestrator | 2026-04-05 01:48:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:47.965073 | orchestrator | 2026-04-05 01:48:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:47.965225 | orchestrator | 2026-04-05 01:48:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:51.015053 | orchestrator | 2026-04-05 01:48:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:48:51.016125 | orchestrator | 2026-04-05 01:48:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:51.016342 | orchestrator | 2026-04-05 01:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:54.077440 | orchestrator | 2026-04-05 01:48:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:54.080101 | orchestrator | 2026-04-05 01:48:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:54.080179 | orchestrator | 2026-04-05 01:48:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:48:57.130678 | orchestrator | 2026-04-05 01:48:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:48:57.132781 | orchestrator | 2026-04-05 01:48:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:48:57.132991 | orchestrator | 2026-04-05 01:48:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:00.182373 | orchestrator | 2026-04-05 01:49:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:00.184112 | orchestrator | 2026-04-05 01:49:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:00.184392 | orchestrator | 2026-04-05 01:49:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:03.225663 | orchestrator | 2026-04-05 01:49:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:03.227209 | orchestrator | 2026-04-05 01:49:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:03.227267 | orchestrator | 2026-04-05 01:49:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:06.279053 | orchestrator | 2026-04-05 01:49:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:06.280245 | orchestrator | 2026-04-05 01:49:06 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:06.280297 | orchestrator | 2026-04-05 01:49:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:09.331451 | orchestrator | 2026-04-05 01:49:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:09.334889 | orchestrator | 2026-04-05 01:49:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:09.335086 | orchestrator | 2026-04-05 01:49:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:12.387986 | orchestrator | 2026-04-05 01:49:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:12.389364 | orchestrator | 2026-04-05 01:49:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:12.389400 | orchestrator | 2026-04-05 01:49:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:15.440647 | orchestrator | 2026-04-05 01:49:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:15.443114 | orchestrator | 2026-04-05 01:49:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:15.443152 | orchestrator | 2026-04-05 01:49:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:18.491796 | orchestrator | 2026-04-05 01:49:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:18.494726 | orchestrator | 2026-04-05 01:49:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:18.494776 | orchestrator | 2026-04-05 01:49:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:21.538618 | orchestrator | 2026-04-05 01:49:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:21.540173 | orchestrator | 2026-04-05 01:49:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:49:21.540223 | orchestrator | 2026-04-05 01:49:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:24.588430 | orchestrator | 2026-04-05 01:49:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:24.589646 | orchestrator | 2026-04-05 01:49:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:24.589679 | orchestrator | 2026-04-05 01:49:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:27.635469 | orchestrator | 2026-04-05 01:49:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:27.636515 | orchestrator | 2026-04-05 01:49:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:27.636588 | orchestrator | 2026-04-05 01:49:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:30.681144 | orchestrator | 2026-04-05 01:49:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:30.682097 | orchestrator | 2026-04-05 01:49:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:30.682448 | orchestrator | 2026-04-05 01:49:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:33.728068 | orchestrator | 2026-04-05 01:49:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:33.730300 | orchestrator | 2026-04-05 01:49:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:33.730475 | orchestrator | 2026-04-05 01:49:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:36.778353 | orchestrator | 2026-04-05 01:49:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:36.780010 | orchestrator | 2026-04-05 01:49:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:36.780347 | orchestrator | 2026-04-05 01:49:36 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:49:39.823944 | orchestrator | 2026-04-05 01:49:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:39.826443 | orchestrator | 2026-04-05 01:49:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:39.826602 | orchestrator | 2026-04-05 01:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:42.875196 | orchestrator | 2026-04-05 01:49:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:42.876719 | orchestrator | 2026-04-05 01:49:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:42.876865 | orchestrator | 2026-04-05 01:49:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:45.929443 | orchestrator | 2026-04-05 01:49:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:45.932410 | orchestrator | 2026-04-05 01:49:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:45.932481 | orchestrator | 2026-04-05 01:49:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:48.986906 | orchestrator | 2026-04-05 01:49:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:48.988706 | orchestrator | 2026-04-05 01:49:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:48.988756 | orchestrator | 2026-04-05 01:49:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:52.037725 | orchestrator | 2026-04-05 01:49:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:52.039873 | orchestrator | 2026-04-05 01:49:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:52.039941 | orchestrator | 2026-04-05 01:49:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:55.088771 | orchestrator | 2026-04-05 
01:49:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:55.090905 | orchestrator | 2026-04-05 01:49:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:55.090991 | orchestrator | 2026-04-05 01:49:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:49:58.136496 | orchestrator | 2026-04-05 01:49:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:49:58.138584 | orchestrator | 2026-04-05 01:49:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:49:58.138706 | orchestrator | 2026-04-05 01:49:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:01.185270 | orchestrator | 2026-04-05 01:50:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:01.188382 | orchestrator | 2026-04-05 01:50:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:01.188424 | orchestrator | 2026-04-05 01:50:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:04.240319 | orchestrator | 2026-04-05 01:50:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:04.241733 | orchestrator | 2026-04-05 01:50:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:04.241755 | orchestrator | 2026-04-05 01:50:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:07.286208 | orchestrator | 2026-04-05 01:50:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:07.287719 | orchestrator | 2026-04-05 01:50:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:07.287788 | orchestrator | 2026-04-05 01:50:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:10.334668 | orchestrator | 2026-04-05 01:50:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 01:50:10.336732 | orchestrator | 2026-04-05 01:50:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:10.336786 | orchestrator | 2026-04-05 01:50:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:13.382713 | orchestrator | 2026-04-05 01:50:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:13.383603 | orchestrator | 2026-04-05 01:50:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:13.384185 | orchestrator | 2026-04-05 01:50:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:16.432683 | orchestrator | 2026-04-05 01:50:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:16.434254 | orchestrator | 2026-04-05 01:50:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:16.434317 | orchestrator | 2026-04-05 01:50:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:19.486729 | orchestrator | 2026-04-05 01:50:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:19.488628 | orchestrator | 2026-04-05 01:50:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:19.488674 | orchestrator | 2026-04-05 01:50:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:22.536286 | orchestrator | 2026-04-05 01:50:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:22.536865 | orchestrator | 2026-04-05 01:50:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:22.536903 | orchestrator | 2026-04-05 01:50:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:25.580696 | orchestrator | 2026-04-05 01:50:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:25.582303 | orchestrator | 2026-04-05 01:50:25 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:25.582334 | orchestrator | 2026-04-05 01:50:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:28.625124 | orchestrator | 2026-04-05 01:50:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:28.626831 | orchestrator | 2026-04-05 01:50:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:28.626864 | orchestrator | 2026-04-05 01:50:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:31.674890 | orchestrator | 2026-04-05 01:50:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:31.676015 | orchestrator | 2026-04-05 01:50:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:31.676062 | orchestrator | 2026-04-05 01:50:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:34.718457 | orchestrator | 2026-04-05 01:50:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:34.719957 | orchestrator | 2026-04-05 01:50:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:34.720014 | orchestrator | 2026-04-05 01:50:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:37.769885 | orchestrator | 2026-04-05 01:50:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:37.771313 | orchestrator | 2026-04-05 01:50:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:37.771408 | orchestrator | 2026-04-05 01:50:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:40.819361 | orchestrator | 2026-04-05 01:50:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:40.819694 | orchestrator | 2026-04-05 01:50:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:50:40.819719 | orchestrator | 2026-04-05 01:50:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:43.871101 | orchestrator | 2026-04-05 01:50:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:43.872505 | orchestrator | 2026-04-05 01:50:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:43.872579 | orchestrator | 2026-04-05 01:50:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:46.925931 | orchestrator | 2026-04-05 01:50:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:46.927572 | orchestrator | 2026-04-05 01:50:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:46.927612 | orchestrator | 2026-04-05 01:50:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:49.982286 | orchestrator | 2026-04-05 01:50:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:49.984295 | orchestrator | 2026-04-05 01:50:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:49.984340 | orchestrator | 2026-04-05 01:50:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:53.031305 | orchestrator | 2026-04-05 01:50:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:53.031559 | orchestrator | 2026-04-05 01:50:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:53.031585 | orchestrator | 2026-04-05 01:50:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:50:56.075689 | orchestrator | 2026-04-05 01:50:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:56.077984 | orchestrator | 2026-04-05 01:50:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:56.078084 | orchestrator | 2026-04-05 01:50:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:50:59.127703 | orchestrator | 2026-04-05 01:50:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:50:59.129915 | orchestrator | 2026-04-05 01:50:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:50:59.129971 | orchestrator | 2026-04-05 01:50:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:02.181471 | orchestrator | 2026-04-05 01:51:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:02.182688 | orchestrator | 2026-04-05 01:51:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:02.182765 | orchestrator | 2026-04-05 01:51:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:05.233670 | orchestrator | 2026-04-05 01:51:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:05.236419 | orchestrator | 2026-04-05 01:51:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:05.236469 | orchestrator | 2026-04-05 01:51:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:08.289331 | orchestrator | 2026-04-05 01:51:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:08.292234 | orchestrator | 2026-04-05 01:51:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:08.292290 | orchestrator | 2026-04-05 01:51:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:11.340313 | orchestrator | 2026-04-05 01:51:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:11.344282 | orchestrator | 2026-04-05 01:51:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:11.344358 | orchestrator | 2026-04-05 01:51:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:14.393319 | orchestrator | 2026-04-05 
01:51:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:14.395708 | orchestrator | 2026-04-05 01:51:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:14.395769 | orchestrator | 2026-04-05 01:51:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:17.447854 | orchestrator | 2026-04-05 01:51:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:17.449200 | orchestrator | 2026-04-05 01:51:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:17.449273 | orchestrator | 2026-04-05 01:51:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:20.491285 | orchestrator | 2026-04-05 01:51:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:20.494490 | orchestrator | 2026-04-05 01:51:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:20.494567 | orchestrator | 2026-04-05 01:51:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:23.551180 | orchestrator | 2026-04-05 01:51:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:23.553515 | orchestrator | 2026-04-05 01:51:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:23.553691 | orchestrator | 2026-04-05 01:51:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:26.596744 | orchestrator | 2026-04-05 01:51:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:51:26.597970 | orchestrator | 2026-04-05 01:51:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:51:26.598061 | orchestrator | 2026-04-05 01:51:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:51:29.650383 | orchestrator | 2026-04-05 01:51:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED
2026-04-05 01:51:29.651939 | orchestrator | 2026-04-05 01:51:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 01:51:29.652001 | orchestrator | 2026-04-05 01:51:29 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output elided: tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 remained in state STARTED, polled roughly every 3 seconds from 01:51:32 to 01:59:02, with a gap in the log between 01:53:01 and 01:55:04 ...]
2026-04-05 01:59:02.164632 | orchestrator | 2026-04-05 01:59:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 01:59:02.167351 | orchestrator | 2026-04-05 01:59:02 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:02.167419 | orchestrator | 2026-04-05 01:59:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:05.209894 | orchestrator | 2026-04-05 01:59:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:05.212355 | orchestrator | 2026-04-05 01:59:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:05.212543 | orchestrator | 2026-04-05 01:59:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:08.251993 | orchestrator | 2026-04-05 01:59:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:08.254844 | orchestrator | 2026-04-05 01:59:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:08.254988 | orchestrator | 2026-04-05 01:59:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:11.299123 | orchestrator | 2026-04-05 01:59:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:11.300751 | orchestrator | 2026-04-05 01:59:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:11.300794 | orchestrator | 2026-04-05 01:59:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:14.342339 | orchestrator | 2026-04-05 01:59:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:14.345162 | orchestrator | 2026-04-05 01:59:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:14.345219 | orchestrator | 2026-04-05 01:59:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:17.389279 | orchestrator | 2026-04-05 01:59:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:17.389849 | orchestrator | 2026-04-05 01:59:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
01:59:17.389891 | orchestrator | 2026-04-05 01:59:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:20.430228 | orchestrator | 2026-04-05 01:59:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:20.430476 | orchestrator | 2026-04-05 01:59:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:20.430505 | orchestrator | 2026-04-05 01:59:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:23.488847 | orchestrator | 2026-04-05 01:59:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:23.489069 | orchestrator | 2026-04-05 01:59:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:23.489094 | orchestrator | 2026-04-05 01:59:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:26.549451 | orchestrator | 2026-04-05 01:59:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:26.552058 | orchestrator | 2026-04-05 01:59:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:26.552126 | orchestrator | 2026-04-05 01:59:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:29.594078 | orchestrator | 2026-04-05 01:59:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:29.594354 | orchestrator | 2026-04-05 01:59:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:29.594374 | orchestrator | 2026-04-05 01:59:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:32.624799 | orchestrator | 2026-04-05 01:59:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:32.625619 | orchestrator | 2026-04-05 01:59:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:32.625657 | orchestrator | 2026-04-05 01:59:32 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 01:59:35.673337 | orchestrator | 2026-04-05 01:59:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:35.674280 | orchestrator | 2026-04-05 01:59:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:35.674336 | orchestrator | 2026-04-05 01:59:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:38.722187 | orchestrator | 2026-04-05 01:59:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:38.723385 | orchestrator | 2026-04-05 01:59:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:38.723453 | orchestrator | 2026-04-05 01:59:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:41.768793 | orchestrator | 2026-04-05 01:59:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:41.770430 | orchestrator | 2026-04-05 01:59:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:41.770489 | orchestrator | 2026-04-05 01:59:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:44.823076 | orchestrator | 2026-04-05 01:59:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:44.825101 | orchestrator | 2026-04-05 01:59:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:44.825152 | orchestrator | 2026-04-05 01:59:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:47.877020 | orchestrator | 2026-04-05 01:59:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:47.879441 | orchestrator | 2026-04-05 01:59:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:47.879495 | orchestrator | 2026-04-05 01:59:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:50.931716 | orchestrator | 2026-04-05 
01:59:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:50.933490 | orchestrator | 2026-04-05 01:59:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:50.933535 | orchestrator | 2026-04-05 01:59:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:53.979454 | orchestrator | 2026-04-05 01:59:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:53.981401 | orchestrator | 2026-04-05 01:59:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:53.981452 | orchestrator | 2026-04-05 01:59:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:59:57.033476 | orchestrator | 2026-04-05 01:59:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 01:59:57.033927 | orchestrator | 2026-04-05 01:59:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 01:59:57.033976 | orchestrator | 2026-04-05 01:59:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:00.095774 | orchestrator | 2026-04-05 02:00:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:00.097773 | orchestrator | 2026-04-05 02:00:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:00.097804 | orchestrator | 2026-04-05 02:00:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:03.145917 | orchestrator | 2026-04-05 02:00:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:03.147625 | orchestrator | 2026-04-05 02:00:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:03.147664 | orchestrator | 2026-04-05 02:00:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:06.201059 | orchestrator | 2026-04-05 02:00:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:00:06.203698 | orchestrator | 2026-04-05 02:00:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:06.203763 | orchestrator | 2026-04-05 02:00:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:09.249713 | orchestrator | 2026-04-05 02:00:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:09.250608 | orchestrator | 2026-04-05 02:00:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:09.250658 | orchestrator | 2026-04-05 02:00:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:12.295325 | orchestrator | 2026-04-05 02:00:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:12.296621 | orchestrator | 2026-04-05 02:00:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:12.296777 | orchestrator | 2026-04-05 02:00:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:15.343304 | orchestrator | 2026-04-05 02:00:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:15.344584 | orchestrator | 2026-04-05 02:00:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:15.344661 | orchestrator | 2026-04-05 02:00:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:18.388375 | orchestrator | 2026-04-05 02:00:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:18.389455 | orchestrator | 2026-04-05 02:00:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:18.389503 | orchestrator | 2026-04-05 02:00:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:21.437250 | orchestrator | 2026-04-05 02:00:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:21.437505 | orchestrator | 2026-04-05 02:00:21 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:21.438345 | orchestrator | 2026-04-05 02:00:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:24.487934 | orchestrator | 2026-04-05 02:00:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:24.490845 | orchestrator | 2026-04-05 02:00:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:24.490890 | orchestrator | 2026-04-05 02:00:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:27.546320 | orchestrator | 2026-04-05 02:00:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:27.548374 | orchestrator | 2026-04-05 02:00:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:27.549190 | orchestrator | 2026-04-05 02:00:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:30.595343 | orchestrator | 2026-04-05 02:00:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:30.597733 | orchestrator | 2026-04-05 02:00:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:30.597811 | orchestrator | 2026-04-05 02:00:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:33.648324 | orchestrator | 2026-04-05 02:00:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:33.651221 | orchestrator | 2026-04-05 02:00:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:33.651304 | orchestrator | 2026-04-05 02:00:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:36.694259 | orchestrator | 2026-04-05 02:00:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:36.697291 | orchestrator | 2026-04-05 02:00:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:00:36.697340 | orchestrator | 2026-04-05 02:00:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:39.753489 | orchestrator | 2026-04-05 02:00:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:39.755299 | orchestrator | 2026-04-05 02:00:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:39.755356 | orchestrator | 2026-04-05 02:00:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:42.807763 | orchestrator | 2026-04-05 02:00:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:42.809621 | orchestrator | 2026-04-05 02:00:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:42.809751 | orchestrator | 2026-04-05 02:00:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:45.853859 | orchestrator | 2026-04-05 02:00:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:45.856817 | orchestrator | 2026-04-05 02:00:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:45.856874 | orchestrator | 2026-04-05 02:00:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:48.904660 | orchestrator | 2026-04-05 02:00:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:48.906945 | orchestrator | 2026-04-05 02:00:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:48.907005 | orchestrator | 2026-04-05 02:00:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:51.952276 | orchestrator | 2026-04-05 02:00:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:51.953938 | orchestrator | 2026-04-05 02:00:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:51.954002 | orchestrator | 2026-04-05 02:00:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:00:54.997796 | orchestrator | 2026-04-05 02:00:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:54.998970 | orchestrator | 2026-04-05 02:00:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:54.999007 | orchestrator | 2026-04-05 02:00:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:00:58.047478 | orchestrator | 2026-04-05 02:00:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:00:58.049310 | orchestrator | 2026-04-05 02:00:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:00:58.049357 | orchestrator | 2026-04-05 02:00:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:01.101024 | orchestrator | 2026-04-05 02:01:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:01.102922 | orchestrator | 2026-04-05 02:01:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:01.102974 | orchestrator | 2026-04-05 02:01:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:04.140224 | orchestrator | 2026-04-05 02:01:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:04.141268 | orchestrator | 2026-04-05 02:01:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:04.141846 | orchestrator | 2026-04-05 02:01:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:07.185227 | orchestrator | 2026-04-05 02:01:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:07.185908 | orchestrator | 2026-04-05 02:01:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:07.185950 | orchestrator | 2026-04-05 02:01:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:10.234204 | orchestrator | 2026-04-05 
02:01:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:10.235645 | orchestrator | 2026-04-05 02:01:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:10.235712 | orchestrator | 2026-04-05 02:01:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:13.279785 | orchestrator | 2026-04-05 02:01:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:13.282583 | orchestrator | 2026-04-05 02:01:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:13.282643 | orchestrator | 2026-04-05 02:01:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:16.324745 | orchestrator | 2026-04-05 02:01:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:16.325402 | orchestrator | 2026-04-05 02:01:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:16.325448 | orchestrator | 2026-04-05 02:01:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:19.372487 | orchestrator | 2026-04-05 02:01:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:19.374002 | orchestrator | 2026-04-05 02:01:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:19.374103 | orchestrator | 2026-04-05 02:01:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:22.422576 | orchestrator | 2026-04-05 02:01:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:22.422814 | orchestrator | 2026-04-05 02:01:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:22.422965 | orchestrator | 2026-04-05 02:01:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:25.472486 | orchestrator | 2026-04-05 02:01:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:01:25.475544 | orchestrator | 2026-04-05 02:01:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:25.475656 | orchestrator | 2026-04-05 02:01:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:28.523393 | orchestrator | 2026-04-05 02:01:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:28.525926 | orchestrator | 2026-04-05 02:01:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:28.526211 | orchestrator | 2026-04-05 02:01:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:31.567843 | orchestrator | 2026-04-05 02:01:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:31.569420 | orchestrator | 2026-04-05 02:01:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:31.569560 | orchestrator | 2026-04-05 02:01:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:34.616574 | orchestrator | 2026-04-05 02:01:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:34.616825 | orchestrator | 2026-04-05 02:01:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:34.617429 | orchestrator | 2026-04-05 02:01:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:37.663864 | orchestrator | 2026-04-05 02:01:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:37.664880 | orchestrator | 2026-04-05 02:01:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:37.664919 | orchestrator | 2026-04-05 02:01:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:40.717063 | orchestrator | 2026-04-05 02:01:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:40.718895 | orchestrator | 2026-04-05 02:01:40 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:40.718956 | orchestrator | 2026-04-05 02:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:43.773870 | orchestrator | 2026-04-05 02:01:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:43.776296 | orchestrator | 2026-04-05 02:01:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:43.776385 | orchestrator | 2026-04-05 02:01:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:46.828256 | orchestrator | 2026-04-05 02:01:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:46.829826 | orchestrator | 2026-04-05 02:01:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:46.829861 | orchestrator | 2026-04-05 02:01:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:49.883283 | orchestrator | 2026-04-05 02:01:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:49.885013 | orchestrator | 2026-04-05 02:01:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:49.885479 | orchestrator | 2026-04-05 02:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:52.940351 | orchestrator | 2026-04-05 02:01:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:52.942809 | orchestrator | 2026-04-05 02:01:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:52.942900 | orchestrator | 2026-04-05 02:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:55.989221 | orchestrator | 2026-04-05 02:01:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:55.991408 | orchestrator | 2026-04-05 02:01:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:01:55.991465 | orchestrator | 2026-04-05 02:01:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:01:59.047677 | orchestrator | 2026-04-05 02:01:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:01:59.049306 | orchestrator | 2026-04-05 02:01:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:01:59.049363 | orchestrator | 2026-04-05 02:01:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:02.100149 | orchestrator | 2026-04-05 02:02:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:02.102678 | orchestrator | 2026-04-05 02:02:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:02.102788 | orchestrator | 2026-04-05 02:02:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:05.146753 | orchestrator | 2026-04-05 02:02:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:05.147034 | orchestrator | 2026-04-05 02:02:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:05.147074 | orchestrator | 2026-04-05 02:02:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:08.193596 | orchestrator | 2026-04-05 02:02:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:08.195259 | orchestrator | 2026-04-05 02:02:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:08.195345 | orchestrator | 2026-04-05 02:02:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:11.242383 | orchestrator | 2026-04-05 02:02:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:11.244845 | orchestrator | 2026-04-05 02:02:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:11.244907 | orchestrator | 2026-04-05 02:02:11 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:02:14.291466 | orchestrator | 2026-04-05 02:02:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:14.295057 | orchestrator | 2026-04-05 02:02:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:14.295169 | orchestrator | 2026-04-05 02:02:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:17.350597 | orchestrator | 2026-04-05 02:02:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:17.352442 | orchestrator | 2026-04-05 02:02:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:17.352496 | orchestrator | 2026-04-05 02:02:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:20.400097 | orchestrator | 2026-04-05 02:02:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:20.401716 | orchestrator | 2026-04-05 02:02:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:20.401747 | orchestrator | 2026-04-05 02:02:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:23.445874 | orchestrator | 2026-04-05 02:02:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:23.448214 | orchestrator | 2026-04-05 02:02:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:23.448322 | orchestrator | 2026-04-05 02:02:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:26.495165 | orchestrator | 2026-04-05 02:02:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:26.496495 | orchestrator | 2026-04-05 02:02:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:26.496582 | orchestrator | 2026-04-05 02:02:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:29.545922 | orchestrator | 2026-04-05 
02:02:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:29.546488 | orchestrator | 2026-04-05 02:02:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:29.546610 | orchestrator | 2026-04-05 02:02:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:32.587817 | orchestrator | 2026-04-05 02:02:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:32.589498 | orchestrator | 2026-04-05 02:02:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:32.589567 | orchestrator | 2026-04-05 02:02:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:35.638148 | orchestrator | 2026-04-05 02:02:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:35.639859 | orchestrator | 2026-04-05 02:02:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:35.639919 | orchestrator | 2026-04-05 02:02:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:38.691282 | orchestrator | 2026-04-05 02:02:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:38.694276 | orchestrator | 2026-04-05 02:02:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:38.694549 | orchestrator | 2026-04-05 02:02:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:41.737149 | orchestrator | 2026-04-05 02:02:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:02:41.738502 | orchestrator | 2026-04-05 02:02:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:02:41.738581 | orchestrator | 2026-04-05 02:02:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:02:44.784048 | orchestrator | 2026-04-05 02:02:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:02:44.785673 | orchestrator | 2026-04-05 02:02:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 02:02:44.785763 | orchestrator | 2026-04-05 02:02:44 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated roughly every 3 seconds from 02:02:47 through 02:08:02; tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 remain in state STARTED throughout ...]
2026-04-05 02:08:02.111795 | orchestrator | 2026-04-05 02:08:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state
STARTED 2026-04-05 02:08:02.114369 | orchestrator | 2026-04-05 02:08:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:02.114448 | orchestrator | 2026-04-05 02:08:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:05.171921 | orchestrator | 2026-04-05 02:08:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:05.173391 | orchestrator | 2026-04-05 02:08:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:05.173540 | orchestrator | 2026-04-05 02:08:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:08.222901 | orchestrator | 2026-04-05 02:08:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:08.224825 | orchestrator | 2026-04-05 02:08:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:08.224896 | orchestrator | 2026-04-05 02:08:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:11.279155 | orchestrator | 2026-04-05 02:08:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:11.281242 | orchestrator | 2026-04-05 02:08:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:11.281331 | orchestrator | 2026-04-05 02:08:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:14.331191 | orchestrator | 2026-04-05 02:08:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:14.334598 | orchestrator | 2026-04-05 02:08:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:14.334676 | orchestrator | 2026-04-05 02:08:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:17.382733 | orchestrator | 2026-04-05 02:08:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:17.387148 | orchestrator | 2026-04-05 02:08:17 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:17.387591 | orchestrator | 2026-04-05 02:08:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:20.429621 | orchestrator | 2026-04-05 02:08:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:20.431171 | orchestrator | 2026-04-05 02:08:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:20.431228 | orchestrator | 2026-04-05 02:08:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:23.483595 | orchestrator | 2026-04-05 02:08:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:23.484800 | orchestrator | 2026-04-05 02:08:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:23.484894 | orchestrator | 2026-04-05 02:08:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:26.527002 | orchestrator | 2026-04-05 02:08:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:26.528888 | orchestrator | 2026-04-05 02:08:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:26.528961 | orchestrator | 2026-04-05 02:08:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:29.583249 | orchestrator | 2026-04-05 02:08:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:29.583968 | orchestrator | 2026-04-05 02:08:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:29.584014 | orchestrator | 2026-04-05 02:08:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:32.629503 | orchestrator | 2026-04-05 02:08:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:32.630102 | orchestrator | 2026-04-05 02:08:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:08:32.630207 | orchestrator | 2026-04-05 02:08:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:35.675906 | orchestrator | 2026-04-05 02:08:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:35.676952 | orchestrator | 2026-04-05 02:08:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:35.676990 | orchestrator | 2026-04-05 02:08:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:38.724351 | orchestrator | 2026-04-05 02:08:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:38.726946 | orchestrator | 2026-04-05 02:08:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:38.727007 | orchestrator | 2026-04-05 02:08:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:41.776140 | orchestrator | 2026-04-05 02:08:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:41.777827 | orchestrator | 2026-04-05 02:08:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:41.777947 | orchestrator | 2026-04-05 02:08:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:44.826992 | orchestrator | 2026-04-05 02:08:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:44.828590 | orchestrator | 2026-04-05 02:08:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:44.828650 | orchestrator | 2026-04-05 02:08:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:47.879332 | orchestrator | 2026-04-05 02:08:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:47.881393 | orchestrator | 2026-04-05 02:08:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:47.881485 | orchestrator | 2026-04-05 02:08:47 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:08:50.936448 | orchestrator | 2026-04-05 02:08:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:50.938445 | orchestrator | 2026-04-05 02:08:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:50.938530 | orchestrator | 2026-04-05 02:08:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:53.993910 | orchestrator | 2026-04-05 02:08:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:53.995785 | orchestrator | 2026-04-05 02:08:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:53.995894 | orchestrator | 2026-04-05 02:08:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:08:57.050284 | orchestrator | 2026-04-05 02:08:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:08:57.052630 | orchestrator | 2026-04-05 02:08:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:08:57.052750 | orchestrator | 2026-04-05 02:08:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:00.103820 | orchestrator | 2026-04-05 02:09:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:00.105316 | orchestrator | 2026-04-05 02:09:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:00.105375 | orchestrator | 2026-04-05 02:09:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:03.150419 | orchestrator | 2026-04-05 02:09:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:03.150912 | orchestrator | 2026-04-05 02:09:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:03.150945 | orchestrator | 2026-04-05 02:09:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:06.207064 | orchestrator | 2026-04-05 
02:09:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:06.208825 | orchestrator | 2026-04-05 02:09:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:06.208873 | orchestrator | 2026-04-05 02:09:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:09.262510 | orchestrator | 2026-04-05 02:09:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:09.263588 | orchestrator | 2026-04-05 02:09:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:09.263655 | orchestrator | 2026-04-05 02:09:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:12.314326 | orchestrator | 2026-04-05 02:09:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:12.316611 | orchestrator | 2026-04-05 02:09:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:12.316647 | orchestrator | 2026-04-05 02:09:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:15.365501 | orchestrator | 2026-04-05 02:09:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:15.366435 | orchestrator | 2026-04-05 02:09:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:15.366478 | orchestrator | 2026-04-05 02:09:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:18.421478 | orchestrator | 2026-04-05 02:09:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:18.423747 | orchestrator | 2026-04-05 02:09:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:18.423851 | orchestrator | 2026-04-05 02:09:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:21.480104 | orchestrator | 2026-04-05 02:09:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:09:21.482139 | orchestrator | 2026-04-05 02:09:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:21.482237 | orchestrator | 2026-04-05 02:09:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:24.529808 | orchestrator | 2026-04-05 02:09:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:24.530923 | orchestrator | 2026-04-05 02:09:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:24.531009 | orchestrator | 2026-04-05 02:09:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:27.577401 | orchestrator | 2026-04-05 02:09:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:27.579143 | orchestrator | 2026-04-05 02:09:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:27.579418 | orchestrator | 2026-04-05 02:09:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:30.625431 | orchestrator | 2026-04-05 02:09:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:30.626320 | orchestrator | 2026-04-05 02:09:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:30.626348 | orchestrator | 2026-04-05 02:09:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:33.672999 | orchestrator | 2026-04-05 02:09:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:33.674532 | orchestrator | 2026-04-05 02:09:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:33.675004 | orchestrator | 2026-04-05 02:09:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:36.726109 | orchestrator | 2026-04-05 02:09:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:36.728348 | orchestrator | 2026-04-05 02:09:36 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:36.728399 | orchestrator | 2026-04-05 02:09:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:39.772271 | orchestrator | 2026-04-05 02:09:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:39.776327 | orchestrator | 2026-04-05 02:09:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:39.776848 | orchestrator | 2026-04-05 02:09:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:42.822484 | orchestrator | 2026-04-05 02:09:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:42.823849 | orchestrator | 2026-04-05 02:09:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:42.823902 | orchestrator | 2026-04-05 02:09:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:45.866358 | orchestrator | 2026-04-05 02:09:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:45.868876 | orchestrator | 2026-04-05 02:09:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:45.868959 | orchestrator | 2026-04-05 02:09:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:48.918653 | orchestrator | 2026-04-05 02:09:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:48.920194 | orchestrator | 2026-04-05 02:09:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:48.920258 | orchestrator | 2026-04-05 02:09:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:51.966248 | orchestrator | 2026-04-05 02:09:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:51.966659 | orchestrator | 2026-04-05 02:09:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:09:51.966778 | orchestrator | 2026-04-05 02:09:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:55.020499 | orchestrator | 2026-04-05 02:09:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:55.021499 | orchestrator | 2026-04-05 02:09:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:55.021576 | orchestrator | 2026-04-05 02:09:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:09:58.067055 | orchestrator | 2026-04-05 02:09:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:09:58.067210 | orchestrator | 2026-04-05 02:09:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:09:58.067352 | orchestrator | 2026-04-05 02:09:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:01.111177 | orchestrator | 2026-04-05 02:10:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:01.111315 | orchestrator | 2026-04-05 02:10:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:01.111331 | orchestrator | 2026-04-05 02:10:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:04.154921 | orchestrator | 2026-04-05 02:10:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:04.156601 | orchestrator | 2026-04-05 02:10:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:04.156653 | orchestrator | 2026-04-05 02:10:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:07.198626 | orchestrator | 2026-04-05 02:10:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:07.200311 | orchestrator | 2026-04-05 02:10:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:07.200362 | orchestrator | 2026-04-05 02:10:07 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:10:10.240233 | orchestrator | 2026-04-05 02:10:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:10.242125 | orchestrator | 2026-04-05 02:10:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:10.242210 | orchestrator | 2026-04-05 02:10:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:13.282606 | orchestrator | 2026-04-05 02:10:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:13.284932 | orchestrator | 2026-04-05 02:10:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:13.285067 | orchestrator | 2026-04-05 02:10:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:16.325354 | orchestrator | 2026-04-05 02:10:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:16.326878 | orchestrator | 2026-04-05 02:10:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:16.326970 | orchestrator | 2026-04-05 02:10:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:19.373882 | orchestrator | 2026-04-05 02:10:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:19.374655 | orchestrator | 2026-04-05 02:10:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:19.374692 | orchestrator | 2026-04-05 02:10:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:22.408898 | orchestrator | 2026-04-05 02:10:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:22.411021 | orchestrator | 2026-04-05 02:10:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:22.411080 | orchestrator | 2026-04-05 02:10:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:25.448103 | orchestrator | 2026-04-05 
02:10:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:25.448598 | orchestrator | 2026-04-05 02:10:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:25.448619 | orchestrator | 2026-04-05 02:10:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:28.484989 | orchestrator | 2026-04-05 02:10:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:28.486302 | orchestrator | 2026-04-05 02:10:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:28.486358 | orchestrator | 2026-04-05 02:10:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:31.520598 | orchestrator | 2026-04-05 02:10:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:31.524646 | orchestrator | 2026-04-05 02:10:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:31.524751 | orchestrator | 2026-04-05 02:10:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:34.571743 | orchestrator | 2026-04-05 02:10:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:34.574403 | orchestrator | 2026-04-05 02:10:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:34.574564 | orchestrator | 2026-04-05 02:10:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:37.625995 | orchestrator | 2026-04-05 02:10:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:37.627592 | orchestrator | 2026-04-05 02:10:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:37.627667 | orchestrator | 2026-04-05 02:10:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:40.680566 | orchestrator | 2026-04-05 02:10:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:10:40.682803 | orchestrator | 2026-04-05 02:10:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:40.682930 | orchestrator | 2026-04-05 02:10:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:43.732170 | orchestrator | 2026-04-05 02:10:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:43.733913 | orchestrator | 2026-04-05 02:10:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:43.734121 | orchestrator | 2026-04-05 02:10:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:46.777649 | orchestrator | 2026-04-05 02:10:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:46.778992 | orchestrator | 2026-04-05 02:10:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:46.779051 | orchestrator | 2026-04-05 02:10:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:49.818360 | orchestrator | 2026-04-05 02:10:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:49.819094 | orchestrator | 2026-04-05 02:10:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:49.819140 | orchestrator | 2026-04-05 02:10:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:52.866665 | orchestrator | 2026-04-05 02:10:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:52.868176 | orchestrator | 2026-04-05 02:10:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:52.868243 | orchestrator | 2026-04-05 02:10:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:55.913204 | orchestrator | 2026-04-05 02:10:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:55.913953 | orchestrator | 2026-04-05 02:10:55 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:55.914137 | orchestrator | 2026-04-05 02:10:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:10:58.960674 | orchestrator | 2026-04-05 02:10:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:10:58.962360 | orchestrator | 2026-04-05 02:10:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:10:58.962491 | orchestrator | 2026-04-05 02:10:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:02.013166 | orchestrator | 2026-04-05 02:11:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:02.016639 | orchestrator | 2026-04-05 02:11:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:11:02.016726 | orchestrator | 2026-04-05 02:11:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:05.061739 | orchestrator | 2026-04-05 02:11:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:05.062484 | orchestrator | 2026-04-05 02:11:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:11:05.062521 | orchestrator | 2026-04-05 02:11:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:08.117361 | orchestrator | 2026-04-05 02:11:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:08.120676 | orchestrator | 2026-04-05 02:11:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:11:08.120913 | orchestrator | 2026-04-05 02:11:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:11.174602 | orchestrator | 2026-04-05 02:11:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:11.174773 | orchestrator | 2026-04-05 02:11:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:11:11.174796 | orchestrator | 2026-04-05 02:11:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:14.217453 | orchestrator | 2026-04-05 02:11:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:14.220136 | orchestrator | 2026-04-05 02:11:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:11:14.220245 | orchestrator | 2026-04-05 02:11:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:17.265299 | orchestrator | 2026-04-05 02:11:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:17.267588 | orchestrator | 2026-04-05 02:11:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:11:17.267767 | orchestrator | 2026-04-05 02:11:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:20.318430 | orchestrator | 2026-04-05 02:11:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:20.322804 | orchestrator | 2026-04-05 02:11:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:11:20.322899 | orchestrator | 2026-04-05 02:11:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:23.376584 | orchestrator | 2026-04-05 02:11:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:23.376685 | orchestrator | 2026-04-05 02:11:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:11:23.376700 | orchestrator | 2026-04-05 02:11:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:11:26.421245 | orchestrator | 2026-04-05 02:11:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:11:26.422958 | orchestrator | 2026-04-05 02:11:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:11:26.423026 | orchestrator | 2026-04-05 02:11:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:11:29.469878 | orchestrator | 2026-04-05 02:11:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 02:11:29.471475 | orchestrator | 2026-04-05 02:11:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 02:11:29.471528 | orchestrator | 2026-04-05 02:11:29 | INFO  | Wait 1 second(s) until the next check
[identical poll cycle repeated every ~3 seconds from 02:11:32 through 02:16:40; both tasks remained in state STARTED throughout]
2026-04-05 02:16:43.802475 | orchestrator | 2026-04-05 02:16:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 02:16:43.806099 | orchestrator | 2026-04-05 02:16:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 02:16:43.806223 | orchestrator | 2026-04-05 02:16:43 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:16:46.862744 | orchestrator | 2026-04-05 02:16:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:16:46.866081 | orchestrator | 2026-04-05 02:16:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:16:46.866196 | orchestrator | 2026-04-05 02:16:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:16:49.916502 | orchestrator | 2026-04-05 02:16:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:16:49.918338 | orchestrator | 2026-04-05 02:16:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:16:49.918386 | orchestrator | 2026-04-05 02:16:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:16:52.966714 | orchestrator | 2026-04-05 02:16:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:16:52.970375 | orchestrator | 2026-04-05 02:16:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:16:52.970441 | orchestrator | 2026-04-05 02:16:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:16:56.021776 | orchestrator | 2026-04-05 02:16:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:16:56.026466 | orchestrator | 2026-04-05 02:16:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:16:56.026554 | orchestrator | 2026-04-05 02:16:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:16:59.084878 | orchestrator | 2026-04-05 02:16:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:16:59.088133 | orchestrator | 2026-04-05 02:16:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:16:59.088318 | orchestrator | 2026-04-05 02:16:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:02.143836 | orchestrator | 2026-04-05 
02:17:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:02.144902 | orchestrator | 2026-04-05 02:17:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:02.145103 | orchestrator | 2026-04-05 02:17:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:05.203330 | orchestrator | 2026-04-05 02:17:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:05.204570 | orchestrator | 2026-04-05 02:17:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:05.204663 | orchestrator | 2026-04-05 02:17:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:08.251524 | orchestrator | 2026-04-05 02:17:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:08.253030 | orchestrator | 2026-04-05 02:17:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:08.253155 | orchestrator | 2026-04-05 02:17:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:11.305932 | orchestrator | 2026-04-05 02:17:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:11.307428 | orchestrator | 2026-04-05 02:17:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:11.307477 | orchestrator | 2026-04-05 02:17:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:14.355304 | orchestrator | 2026-04-05 02:17:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:14.356015 | orchestrator | 2026-04-05 02:17:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:14.356088 | orchestrator | 2026-04-05 02:17:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:17.402915 | orchestrator | 2026-04-05 02:17:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:17:17.406390 | orchestrator | 2026-04-05 02:17:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:17.406524 | orchestrator | 2026-04-05 02:17:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:20.463373 | orchestrator | 2026-04-05 02:17:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:20.465251 | orchestrator | 2026-04-05 02:17:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:20.465307 | orchestrator | 2026-04-05 02:17:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:23.519692 | orchestrator | 2026-04-05 02:17:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:23.524153 | orchestrator | 2026-04-05 02:17:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:23.524327 | orchestrator | 2026-04-05 02:17:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:26.571675 | orchestrator | 2026-04-05 02:17:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:26.573467 | orchestrator | 2026-04-05 02:17:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:26.573507 | orchestrator | 2026-04-05 02:17:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:29.621683 | orchestrator | 2026-04-05 02:17:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:29.624023 | orchestrator | 2026-04-05 02:17:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:29.624082 | orchestrator | 2026-04-05 02:17:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:32.674165 | orchestrator | 2026-04-05 02:17:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:32.675828 | orchestrator | 2026-04-05 02:17:32 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:32.675856 | orchestrator | 2026-04-05 02:17:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:35.734235 | orchestrator | 2026-04-05 02:17:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:35.735383 | orchestrator | 2026-04-05 02:17:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:35.735522 | orchestrator | 2026-04-05 02:17:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:38.784733 | orchestrator | 2026-04-05 02:17:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:38.786792 | orchestrator | 2026-04-05 02:17:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:38.786844 | orchestrator | 2026-04-05 02:17:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:41.835064 | orchestrator | 2026-04-05 02:17:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:41.837936 | orchestrator | 2026-04-05 02:17:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:41.838079 | orchestrator | 2026-04-05 02:17:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:44.892061 | orchestrator | 2026-04-05 02:17:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:44.893494 | orchestrator | 2026-04-05 02:17:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:44.893615 | orchestrator | 2026-04-05 02:17:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:47.944051 | orchestrator | 2026-04-05 02:17:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:47.945623 | orchestrator | 2026-04-05 02:17:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:17:47.945668 | orchestrator | 2026-04-05 02:17:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:51.000690 | orchestrator | 2026-04-05 02:17:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:51.003365 | orchestrator | 2026-04-05 02:17:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:51.003425 | orchestrator | 2026-04-05 02:17:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:54.049076 | orchestrator | 2026-04-05 02:17:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:54.050934 | orchestrator | 2026-04-05 02:17:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:54.051006 | orchestrator | 2026-04-05 02:17:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:17:57.101299 | orchestrator | 2026-04-05 02:17:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:17:57.103502 | orchestrator | 2026-04-05 02:17:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:17:57.103566 | orchestrator | 2026-04-05 02:17:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:00.155691 | orchestrator | 2026-04-05 02:18:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:00.157198 | orchestrator | 2026-04-05 02:18:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:00.157318 | orchestrator | 2026-04-05 02:18:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:03.204131 | orchestrator | 2026-04-05 02:18:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:03.206330 | orchestrator | 2026-04-05 02:18:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:03.206385 | orchestrator | 2026-04-05 02:18:03 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:18:06.252337 | orchestrator | 2026-04-05 02:18:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:06.253419 | orchestrator | 2026-04-05 02:18:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:06.253446 | orchestrator | 2026-04-05 02:18:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:09.298093 | orchestrator | 2026-04-05 02:18:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:09.299706 | orchestrator | 2026-04-05 02:18:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:09.299764 | orchestrator | 2026-04-05 02:18:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:12.349580 | orchestrator | 2026-04-05 02:18:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:12.351819 | orchestrator | 2026-04-05 02:18:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:12.351885 | orchestrator | 2026-04-05 02:18:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:15.393495 | orchestrator | 2026-04-05 02:18:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:15.397887 | orchestrator | 2026-04-05 02:18:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:15.397936 | orchestrator | 2026-04-05 02:18:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:18.450743 | orchestrator | 2026-04-05 02:18:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:18.453465 | orchestrator | 2026-04-05 02:18:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:18.453517 | orchestrator | 2026-04-05 02:18:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:21.505479 | orchestrator | 2026-04-05 
02:18:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:21.507306 | orchestrator | 2026-04-05 02:18:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:21.507421 | orchestrator | 2026-04-05 02:18:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:24.564148 | orchestrator | 2026-04-05 02:18:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:24.566877 | orchestrator | 2026-04-05 02:18:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:24.566954 | orchestrator | 2026-04-05 02:18:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:27.613994 | orchestrator | 2026-04-05 02:18:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:27.616538 | orchestrator | 2026-04-05 02:18:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:27.616600 | orchestrator | 2026-04-05 02:18:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:30.669200 | orchestrator | 2026-04-05 02:18:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:30.673016 | orchestrator | 2026-04-05 02:18:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:30.673086 | orchestrator | 2026-04-05 02:18:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:33.727591 | orchestrator | 2026-04-05 02:18:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:33.731349 | orchestrator | 2026-04-05 02:18:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:33.731416 | orchestrator | 2026-04-05 02:18:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:36.787141 | orchestrator | 2026-04-05 02:18:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:18:36.789001 | orchestrator | 2026-04-05 02:18:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:36.789049 | orchestrator | 2026-04-05 02:18:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:39.838741 | orchestrator | 2026-04-05 02:18:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:39.841724 | orchestrator | 2026-04-05 02:18:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:39.841780 | orchestrator | 2026-04-05 02:18:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:42.894626 | orchestrator | 2026-04-05 02:18:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:42.896700 | orchestrator | 2026-04-05 02:18:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:42.896808 | orchestrator | 2026-04-05 02:18:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:45.941745 | orchestrator | 2026-04-05 02:18:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:45.942933 | orchestrator | 2026-04-05 02:18:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:45.942980 | orchestrator | 2026-04-05 02:18:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:48.996163 | orchestrator | 2026-04-05 02:18:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:48.997725 | orchestrator | 2026-04-05 02:18:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:48.997879 | orchestrator | 2026-04-05 02:18:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:52.048131 | orchestrator | 2026-04-05 02:18:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:52.050083 | orchestrator | 2026-04-05 02:18:52 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:52.050160 | orchestrator | 2026-04-05 02:18:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:55.099477 | orchestrator | 2026-04-05 02:18:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:55.101723 | orchestrator | 2026-04-05 02:18:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:55.101879 | orchestrator | 2026-04-05 02:18:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:18:58.149246 | orchestrator | 2026-04-05 02:18:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:18:58.151731 | orchestrator | 2026-04-05 02:18:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:18:58.151767 | orchestrator | 2026-04-05 02:18:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:01.197657 | orchestrator | 2026-04-05 02:19:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:01.198977 | orchestrator | 2026-04-05 02:19:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:01.199032 | orchestrator | 2026-04-05 02:19:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:04.245879 | orchestrator | 2026-04-05 02:19:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:04.247837 | orchestrator | 2026-04-05 02:19:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:04.248005 | orchestrator | 2026-04-05 02:19:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:07.295798 | orchestrator | 2026-04-05 02:19:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:07.298177 | orchestrator | 2026-04-05 02:19:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:19:07.298239 | orchestrator | 2026-04-05 02:19:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:10.349678 | orchestrator | 2026-04-05 02:19:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:10.351545 | orchestrator | 2026-04-05 02:19:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:10.351578 | orchestrator | 2026-04-05 02:19:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:13.404932 | orchestrator | 2026-04-05 02:19:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:13.406722 | orchestrator | 2026-04-05 02:19:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:13.406812 | orchestrator | 2026-04-05 02:19:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:16.453508 | orchestrator | 2026-04-05 02:19:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:16.455213 | orchestrator | 2026-04-05 02:19:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:16.455258 | orchestrator | 2026-04-05 02:19:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:19.501626 | orchestrator | 2026-04-05 02:19:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:19.503986 | orchestrator | 2026-04-05 02:19:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:19.504036 | orchestrator | 2026-04-05 02:19:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:22.556467 | orchestrator | 2026-04-05 02:19:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:22.558161 | orchestrator | 2026-04-05 02:19:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:22.558484 | orchestrator | 2026-04-05 02:19:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:19:25.610469 | orchestrator | 2026-04-05 02:19:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:25.611921 | orchestrator | 2026-04-05 02:19:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:25.611966 | orchestrator | 2026-04-05 02:19:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:28.662334 | orchestrator | 2026-04-05 02:19:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:28.664506 | orchestrator | 2026-04-05 02:19:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:28.664557 | orchestrator | 2026-04-05 02:19:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:31.709593 | orchestrator | 2026-04-05 02:19:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:31.713504 | orchestrator | 2026-04-05 02:19:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:31.713565 | orchestrator | 2026-04-05 02:19:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:34.759555 | orchestrator | 2026-04-05 02:19:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:34.762611 | orchestrator | 2026-04-05 02:19:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:34.762893 | orchestrator | 2026-04-05 02:19:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:37.816804 | orchestrator | 2026-04-05 02:19:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:37.820234 | orchestrator | 2026-04-05 02:19:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:37.820385 | orchestrator | 2026-04-05 02:19:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:40.865828 | orchestrator | 2026-04-05 
02:19:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:40.866741 | orchestrator | 2026-04-05 02:19:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:40.866776 | orchestrator | 2026-04-05 02:19:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:43.912184 | orchestrator | 2026-04-05 02:19:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:43.915046 | orchestrator | 2026-04-05 02:19:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:43.915128 | orchestrator | 2026-04-05 02:19:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:46.965056 | orchestrator | 2026-04-05 02:19:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:46.966258 | orchestrator | 2026-04-05 02:19:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:46.966289 | orchestrator | 2026-04-05 02:19:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:50.021027 | orchestrator | 2026-04-05 02:19:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:50.022474 | orchestrator | 2026-04-05 02:19:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:50.022522 | orchestrator | 2026-04-05 02:19:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:53.073969 | orchestrator | 2026-04-05 02:19:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:53.075015 | orchestrator | 2026-04-05 02:19:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:53.075047 | orchestrator | 2026-04-05 02:19:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:56.129736 | orchestrator | 2026-04-05 02:19:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:19:56.132752 | orchestrator | 2026-04-05 02:19:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:56.132791 | orchestrator | 2026-04-05 02:19:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:19:59.182494 | orchestrator | 2026-04-05 02:19:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:19:59.184448 | orchestrator | 2026-04-05 02:19:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:19:59.184690 | orchestrator | 2026-04-05 02:19:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:02.228216 | orchestrator | 2026-04-05 02:20:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:02.229982 | orchestrator | 2026-04-05 02:20:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:20:02.231126 | orchestrator | 2026-04-05 02:20:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:05.276286 | orchestrator | 2026-04-05 02:20:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:05.277810 | orchestrator | 2026-04-05 02:20:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:20:05.277875 | orchestrator | 2026-04-05 02:20:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:08.331425 | orchestrator | 2026-04-05 02:20:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:08.333458 | orchestrator | 2026-04-05 02:20:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:20:08.333505 | orchestrator | 2026-04-05 02:20:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:11.379796 | orchestrator | 2026-04-05 02:20:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:11.381429 | orchestrator | 2026-04-05 02:20:11 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:20:11.381482 | orchestrator | 2026-04-05 02:20:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:14.439412 | orchestrator | 2026-04-05 02:20:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:14.441356 | orchestrator | 2026-04-05 02:20:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:20:14.441410 | orchestrator | 2026-04-05 02:20:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:17.483999 | orchestrator | 2026-04-05 02:20:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:17.486732 | orchestrator | 2026-04-05 02:20:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:20:17.486796 | orchestrator | 2026-04-05 02:20:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:20.533700 | orchestrator | 2026-04-05 02:20:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:20.534744 | orchestrator | 2026-04-05 02:20:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:20:20.534784 | orchestrator | 2026-04-05 02:20:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:23.575535 | orchestrator | 2026-04-05 02:20:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:23.576435 | orchestrator | 2026-04-05 02:20:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:20:23.576488 | orchestrator | 2026-04-05 02:20:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:20:26.626455 | orchestrator | 2026-04-05 02:20:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:20:26.627677 | orchestrator | 2026-04-05 02:20:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:20:26.627718 | orchestrator | 2026-04-05 02:20:26 | INFO  | Wait 1 second(s) until the next check
2026-04-05 02:20:29.669916 | orchestrator | 2026-04-05 02:20:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 02:20:29.670953 | orchestrator | 2026-04-05 02:20:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 02:20:29.671225 | orchestrator | 2026-04-05 02:20:29 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycles repeated every ~3 seconds from 02:20:32 to 02:27:56: tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 both remain in state STARTED throughout; the only irregularity is a two-minute gap in console output between 02:24:06 and 02:26:06 ...]
2026-04-05 02:27:59.169283 | orchestrator | 2026-04-05 02:27:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 02:27:59.171289 | orchestrator | 2026-04-05 02:27:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 02:27:59.171332 | orchestrator | 2026-04-05 02:27:59 | INFO  | Wait 1 second(s)
until the next check 2026-04-05 02:28:02.217977 | orchestrator | 2026-04-05 02:28:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:02.219178 | orchestrator | 2026-04-05 02:28:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:02.219233 | orchestrator | 2026-04-05 02:28:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:05.265693 | orchestrator | 2026-04-05 02:28:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:05.267960 | orchestrator | 2026-04-05 02:28:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:05.268013 | orchestrator | 2026-04-05 02:28:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:08.316401 | orchestrator | 2026-04-05 02:28:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:08.318430 | orchestrator | 2026-04-05 02:28:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:08.318479 | orchestrator | 2026-04-05 02:28:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:11.363837 | orchestrator | 2026-04-05 02:28:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:11.366277 | orchestrator | 2026-04-05 02:28:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:11.366340 | orchestrator | 2026-04-05 02:28:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:14.410527 | orchestrator | 2026-04-05 02:28:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:14.413015 | orchestrator | 2026-04-05 02:28:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:14.413073 | orchestrator | 2026-04-05 02:28:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:17.455852 | orchestrator | 2026-04-05 
02:28:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:17.457677 | orchestrator | 2026-04-05 02:28:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:17.457737 | orchestrator | 2026-04-05 02:28:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:20.506925 | orchestrator | 2026-04-05 02:28:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:20.508299 | orchestrator | 2026-04-05 02:28:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:20.508346 | orchestrator | 2026-04-05 02:28:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:23.551785 | orchestrator | 2026-04-05 02:28:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:23.553857 | orchestrator | 2026-04-05 02:28:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:23.553920 | orchestrator | 2026-04-05 02:28:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:26.607538 | orchestrator | 2026-04-05 02:28:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:26.609225 | orchestrator | 2026-04-05 02:28:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:26.609524 | orchestrator | 2026-04-05 02:28:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:29.657643 | orchestrator | 2026-04-05 02:28:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:29.659876 | orchestrator | 2026-04-05 02:28:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:29.659980 | orchestrator | 2026-04-05 02:28:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:32.705809 | orchestrator | 2026-04-05 02:28:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:28:32.706910 | orchestrator | 2026-04-05 02:28:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:32.706969 | orchestrator | 2026-04-05 02:28:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:35.756185 | orchestrator | 2026-04-05 02:28:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:35.759355 | orchestrator | 2026-04-05 02:28:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:35.759492 | orchestrator | 2026-04-05 02:28:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:38.799924 | orchestrator | 2026-04-05 02:28:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:38.800750 | orchestrator | 2026-04-05 02:28:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:38.800814 | orchestrator | 2026-04-05 02:28:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:41.841987 | orchestrator | 2026-04-05 02:28:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:41.843411 | orchestrator | 2026-04-05 02:28:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:41.843440 | orchestrator | 2026-04-05 02:28:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:44.878931 | orchestrator | 2026-04-05 02:28:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:44.880248 | orchestrator | 2026-04-05 02:28:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:44.880336 | orchestrator | 2026-04-05 02:28:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:47.920502 | orchestrator | 2026-04-05 02:28:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:47.922222 | orchestrator | 2026-04-05 02:28:47 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:47.922324 | orchestrator | 2026-04-05 02:28:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:50.983520 | orchestrator | 2026-04-05 02:28:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:50.984786 | orchestrator | 2026-04-05 02:28:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:50.984869 | orchestrator | 2026-04-05 02:28:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:54.032948 | orchestrator | 2026-04-05 02:28:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:54.034193 | orchestrator | 2026-04-05 02:28:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:54.034244 | orchestrator | 2026-04-05 02:28:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:28:57.069097 | orchestrator | 2026-04-05 02:28:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:28:57.070355 | orchestrator | 2026-04-05 02:28:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:28:57.070686 | orchestrator | 2026-04-05 02:28:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:00.112816 | orchestrator | 2026-04-05 02:29:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:00.114645 | orchestrator | 2026-04-05 02:29:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:00.114796 | orchestrator | 2026-04-05 02:29:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:03.168666 | orchestrator | 2026-04-05 02:29:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:03.170415 | orchestrator | 2026-04-05 02:29:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:29:03.170991 | orchestrator | 2026-04-05 02:29:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:06.217100 | orchestrator | 2026-04-05 02:29:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:06.219072 | orchestrator | 2026-04-05 02:29:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:06.219142 | orchestrator | 2026-04-05 02:29:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:09.261172 | orchestrator | 2026-04-05 02:29:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:09.263835 | orchestrator | 2026-04-05 02:29:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:09.263925 | orchestrator | 2026-04-05 02:29:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:12.306498 | orchestrator | 2026-04-05 02:29:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:12.307859 | orchestrator | 2026-04-05 02:29:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:12.307909 | orchestrator | 2026-04-05 02:29:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:15.354783 | orchestrator | 2026-04-05 02:29:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:15.355804 | orchestrator | 2026-04-05 02:29:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:15.355877 | orchestrator | 2026-04-05 02:29:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:18.399307 | orchestrator | 2026-04-05 02:29:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:18.401517 | orchestrator | 2026-04-05 02:29:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:18.401642 | orchestrator | 2026-04-05 02:29:18 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:29:21.450986 | orchestrator | 2026-04-05 02:29:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:21.452608 | orchestrator | 2026-04-05 02:29:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:21.452865 | orchestrator | 2026-04-05 02:29:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:24.505854 | orchestrator | 2026-04-05 02:29:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:24.507387 | orchestrator | 2026-04-05 02:29:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:24.507424 | orchestrator | 2026-04-05 02:29:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:27.554473 | orchestrator | 2026-04-05 02:29:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:27.556824 | orchestrator | 2026-04-05 02:29:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:27.556909 | orchestrator | 2026-04-05 02:29:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:30.601665 | orchestrator | 2026-04-05 02:29:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:30.602144 | orchestrator | 2026-04-05 02:29:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:30.602170 | orchestrator | 2026-04-05 02:29:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:33.651836 | orchestrator | 2026-04-05 02:29:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:33.653785 | orchestrator | 2026-04-05 02:29:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:33.653858 | orchestrator | 2026-04-05 02:29:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:36.703975 | orchestrator | 2026-04-05 
02:29:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:36.707664 | orchestrator | 2026-04-05 02:29:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:36.707720 | orchestrator | 2026-04-05 02:29:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:39.753170 | orchestrator | 2026-04-05 02:29:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:39.755042 | orchestrator | 2026-04-05 02:29:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:39.755142 | orchestrator | 2026-04-05 02:29:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:42.803163 | orchestrator | 2026-04-05 02:29:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:42.804883 | orchestrator | 2026-04-05 02:29:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:42.805074 | orchestrator | 2026-04-05 02:29:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:45.849116 | orchestrator | 2026-04-05 02:29:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:45.851236 | orchestrator | 2026-04-05 02:29:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:45.851279 | orchestrator | 2026-04-05 02:29:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:48.898523 | orchestrator | 2026-04-05 02:29:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:48.900525 | orchestrator | 2026-04-05 02:29:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:48.900571 | orchestrator | 2026-04-05 02:29:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:51.951830 | orchestrator | 2026-04-05 02:29:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:29:51.953257 | orchestrator | 2026-04-05 02:29:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:51.953424 | orchestrator | 2026-04-05 02:29:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:55.002247 | orchestrator | 2026-04-05 02:29:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:55.004286 | orchestrator | 2026-04-05 02:29:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:55.004350 | orchestrator | 2026-04-05 02:29:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:29:58.052667 | orchestrator | 2026-04-05 02:29:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:29:58.054979 | orchestrator | 2026-04-05 02:29:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:29:58.055002 | orchestrator | 2026-04-05 02:29:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:01.096828 | orchestrator | 2026-04-05 02:30:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:01.099548 | orchestrator | 2026-04-05 02:30:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:01.099679 | orchestrator | 2026-04-05 02:30:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:04.147091 | orchestrator | 2026-04-05 02:30:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:04.152653 | orchestrator | 2026-04-05 02:30:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:04.152777 | orchestrator | 2026-04-05 02:30:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:07.192844 | orchestrator | 2026-04-05 02:30:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:07.195036 | orchestrator | 2026-04-05 02:30:07 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:07.195086 | orchestrator | 2026-04-05 02:30:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:10.242416 | orchestrator | 2026-04-05 02:30:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:10.245808 | orchestrator | 2026-04-05 02:30:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:10.245893 | orchestrator | 2026-04-05 02:30:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:13.286326 | orchestrator | 2026-04-05 02:30:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:13.287520 | orchestrator | 2026-04-05 02:30:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:13.287564 | orchestrator | 2026-04-05 02:30:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:16.338490 | orchestrator | 2026-04-05 02:30:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:16.341078 | orchestrator | 2026-04-05 02:30:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:16.341132 | orchestrator | 2026-04-05 02:30:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:19.389014 | orchestrator | 2026-04-05 02:30:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:19.391771 | orchestrator | 2026-04-05 02:30:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:19.391854 | orchestrator | 2026-04-05 02:30:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:22.450251 | orchestrator | 2026-04-05 02:30:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:22.452261 | orchestrator | 2026-04-05 02:30:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:30:22.452285 | orchestrator | 2026-04-05 02:30:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:25.497426 | orchestrator | 2026-04-05 02:30:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:25.499322 | orchestrator | 2026-04-05 02:30:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:25.499389 | orchestrator | 2026-04-05 02:30:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:28.548668 | orchestrator | 2026-04-05 02:30:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:28.549497 | orchestrator | 2026-04-05 02:30:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:28.549543 | orchestrator | 2026-04-05 02:30:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:31.594532 | orchestrator | 2026-04-05 02:30:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:31.595728 | orchestrator | 2026-04-05 02:30:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:31.595943 | orchestrator | 2026-04-05 02:30:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:34.641324 | orchestrator | 2026-04-05 02:30:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:34.643400 | orchestrator | 2026-04-05 02:30:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:34.643493 | orchestrator | 2026-04-05 02:30:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:37.698307 | orchestrator | 2026-04-05 02:30:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:37.699983 | orchestrator | 2026-04-05 02:30:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:37.700133 | orchestrator | 2026-04-05 02:30:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:30:40.750870 | orchestrator | 2026-04-05 02:30:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:40.753074 | orchestrator | 2026-04-05 02:30:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:40.753120 | orchestrator | 2026-04-05 02:30:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:43.801504 | orchestrator | 2026-04-05 02:30:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:43.802952 | orchestrator | 2026-04-05 02:30:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:43.803319 | orchestrator | 2026-04-05 02:30:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:46.859893 | orchestrator | 2026-04-05 02:30:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:46.862174 | orchestrator | 2026-04-05 02:30:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:46.862248 | orchestrator | 2026-04-05 02:30:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:49.912980 | orchestrator | 2026-04-05 02:30:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:49.916793 | orchestrator | 2026-04-05 02:30:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:49.916956 | orchestrator | 2026-04-05 02:30:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:52.970317 | orchestrator | 2026-04-05 02:30:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:52.972985 | orchestrator | 2026-04-05 02:30:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:52.973032 | orchestrator | 2026-04-05 02:30:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:56.021391 | orchestrator | 2026-04-05 
02:30:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:56.023055 | orchestrator | 2026-04-05 02:30:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:56.023128 | orchestrator | 2026-04-05 02:30:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:30:59.068787 | orchestrator | 2026-04-05 02:30:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:30:59.071561 | orchestrator | 2026-04-05 02:30:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:30:59.071589 | orchestrator | 2026-04-05 02:30:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:02.116292 | orchestrator | 2026-04-05 02:31:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:02.118731 | orchestrator | 2026-04-05 02:31:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:02.118808 | orchestrator | 2026-04-05 02:31:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:05.165045 | orchestrator | 2026-04-05 02:31:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:05.166694 | orchestrator | 2026-04-05 02:31:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:05.166743 | orchestrator | 2026-04-05 02:31:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:08.222101 | orchestrator | 2026-04-05 02:31:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:08.222888 | orchestrator | 2026-04-05 02:31:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:08.222920 | orchestrator | 2026-04-05 02:31:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:11.267365 | orchestrator | 2026-04-05 02:31:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:31:11.268549 | orchestrator | 2026-04-05 02:31:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:11.268610 | orchestrator | 2026-04-05 02:31:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:14.318159 | orchestrator | 2026-04-05 02:31:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:14.319449 | orchestrator | 2026-04-05 02:31:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:14.319504 | orchestrator | 2026-04-05 02:31:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:17.364214 | orchestrator | 2026-04-05 02:31:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:17.366796 | orchestrator | 2026-04-05 02:31:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:17.366853 | orchestrator | 2026-04-05 02:31:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:20.419616 | orchestrator | 2026-04-05 02:31:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:20.422089 | orchestrator | 2026-04-05 02:31:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:20.422157 | orchestrator | 2026-04-05 02:31:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:23.474448 | orchestrator | 2026-04-05 02:31:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:23.475385 | orchestrator | 2026-04-05 02:31:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:23.475431 | orchestrator | 2026-04-05 02:31:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:26.519605 | orchestrator | 2026-04-05 02:31:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:26.522961 | orchestrator | 2026-04-05 02:31:26 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:26.523057 | orchestrator | 2026-04-05 02:31:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:29.568252 | orchestrator | 2026-04-05 02:31:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:29.570181 | orchestrator | 2026-04-05 02:31:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:29.570226 | orchestrator | 2026-04-05 02:31:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:32.619342 | orchestrator | 2026-04-05 02:31:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:32.622536 | orchestrator | 2026-04-05 02:31:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:32.622614 | orchestrator | 2026-04-05 02:31:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:35.676586 | orchestrator | 2026-04-05 02:31:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:35.678917 | orchestrator | 2026-04-05 02:31:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:35.678964 | orchestrator | 2026-04-05 02:31:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:38.728247 | orchestrator | 2026-04-05 02:31:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:38.730170 | orchestrator | 2026-04-05 02:31:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:31:38.730266 | orchestrator | 2026-04-05 02:31:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:31:41.779038 | orchestrator | 2026-04-05 02:31:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:31:41.780398 | orchestrator | 2026-04-05 02:31:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:31:41.780473 | orchestrator | 2026-04-05 02:31:41 | INFO  | Wait 1 second(s) until the next check
2026-04-05 02:31:44.828597 | orchestrator | 2026-04-05 02:31:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 02:31:44.830751 | orchestrator | 2026-04-05 02:31:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 02:31:44.830802 | orchestrator | 2026-04-05 02:31:44 | INFO  | Wait 1 second(s) until the next check
[... identical status-check cycle repeated every ~3 seconds from 02:31:47 to 02:36:40; both tasks remained in state STARTED throughout ...]
2026-04-05 02:36:43.907236 | orchestrator | 2026-04-05 02:36:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 02:36:43.909078 | orchestrator | 2026-04-05 02:36:43 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:36:43.909178 | orchestrator | 2026-04-05 02:36:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:36:46.963542 | orchestrator | 2026-04-05 02:36:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:36:46.965748 | orchestrator | 2026-04-05 02:36:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:36:46.965804 | orchestrator | 2026-04-05 02:36:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:36:50.019795 | orchestrator | 2026-04-05 02:36:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:36:50.021396 | orchestrator | 2026-04-05 02:36:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:36:50.021450 | orchestrator | 2026-04-05 02:36:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:36:53.071515 | orchestrator | 2026-04-05 02:36:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:36:53.072878 | orchestrator | 2026-04-05 02:36:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:36:53.073037 | orchestrator | 2026-04-05 02:36:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:36:56.122313 | orchestrator | 2026-04-05 02:36:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:36:56.123682 | orchestrator | 2026-04-05 02:36:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:36:56.123737 | orchestrator | 2026-04-05 02:36:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:36:59.170179 | orchestrator | 2026-04-05 02:36:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:36:59.170966 | orchestrator | 2026-04-05 02:36:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:36:59.171041 | orchestrator | 2026-04-05 02:36:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:02.221116 | orchestrator | 2026-04-05 02:37:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:02.222757 | orchestrator | 2026-04-05 02:37:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:02.222799 | orchestrator | 2026-04-05 02:37:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:05.271560 | orchestrator | 2026-04-05 02:37:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:05.273403 | orchestrator | 2026-04-05 02:37:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:05.273455 | orchestrator | 2026-04-05 02:37:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:08.323291 | orchestrator | 2026-04-05 02:37:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:08.325136 | orchestrator | 2026-04-05 02:37:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:08.325193 | orchestrator | 2026-04-05 02:37:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:11.368098 | orchestrator | 2026-04-05 02:37:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:11.369316 | orchestrator | 2026-04-05 02:37:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:11.369357 | orchestrator | 2026-04-05 02:37:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:14.419185 | orchestrator | 2026-04-05 02:37:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:14.421990 | orchestrator | 2026-04-05 02:37:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:14.422133 | orchestrator | 2026-04-05 02:37:14 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:37:17.476420 | orchestrator | 2026-04-05 02:37:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:17.478210 | orchestrator | 2026-04-05 02:37:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:17.478300 | orchestrator | 2026-04-05 02:37:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:20.533812 | orchestrator | 2026-04-05 02:37:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:20.535793 | orchestrator | 2026-04-05 02:37:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:20.535915 | orchestrator | 2026-04-05 02:37:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:23.588273 | orchestrator | 2026-04-05 02:37:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:23.589595 | orchestrator | 2026-04-05 02:37:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:23.589640 | orchestrator | 2026-04-05 02:37:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:26.634356 | orchestrator | 2026-04-05 02:37:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:26.637082 | orchestrator | 2026-04-05 02:37:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:26.637155 | orchestrator | 2026-04-05 02:37:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:29.687889 | orchestrator | 2026-04-05 02:37:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:29.689151 | orchestrator | 2026-04-05 02:37:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:29.689198 | orchestrator | 2026-04-05 02:37:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:32.744974 | orchestrator | 2026-04-05 
02:37:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:32.746637 | orchestrator | 2026-04-05 02:37:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:32.746688 | orchestrator | 2026-04-05 02:37:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:35.808868 | orchestrator | 2026-04-05 02:37:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:35.810410 | orchestrator | 2026-04-05 02:37:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:35.810508 | orchestrator | 2026-04-05 02:37:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:38.854636 | orchestrator | 2026-04-05 02:37:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:38.855947 | orchestrator | 2026-04-05 02:37:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:38.856000 | orchestrator | 2026-04-05 02:37:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:41.909137 | orchestrator | 2026-04-05 02:37:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:41.911510 | orchestrator | 2026-04-05 02:37:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:41.911582 | orchestrator | 2026-04-05 02:37:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:44.957304 | orchestrator | 2026-04-05 02:37:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:44.959014 | orchestrator | 2026-04-05 02:37:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:44.959608 | orchestrator | 2026-04-05 02:37:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:48.010477 | orchestrator | 2026-04-05 02:37:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:37:48.012118 | orchestrator | 2026-04-05 02:37:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:48.012161 | orchestrator | 2026-04-05 02:37:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:51.062861 | orchestrator | 2026-04-05 02:37:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:51.063750 | orchestrator | 2026-04-05 02:37:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:51.063781 | orchestrator | 2026-04-05 02:37:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:54.114277 | orchestrator | 2026-04-05 02:37:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:54.115580 | orchestrator | 2026-04-05 02:37:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:54.115639 | orchestrator | 2026-04-05 02:37:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:37:57.162522 | orchestrator | 2026-04-05 02:37:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:37:57.165150 | orchestrator | 2026-04-05 02:37:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:37:57.165260 | orchestrator | 2026-04-05 02:37:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:00.218512 | orchestrator | 2026-04-05 02:38:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:00.220237 | orchestrator | 2026-04-05 02:38:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:00.220290 | orchestrator | 2026-04-05 02:38:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:03.269950 | orchestrator | 2026-04-05 02:38:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:03.272478 | orchestrator | 2026-04-05 02:38:03 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:03.272559 | orchestrator | 2026-04-05 02:38:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:06.328592 | orchestrator | 2026-04-05 02:38:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:06.328946 | orchestrator | 2026-04-05 02:38:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:06.328972 | orchestrator | 2026-04-05 02:38:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:09.375515 | orchestrator | 2026-04-05 02:38:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:09.377580 | orchestrator | 2026-04-05 02:38:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:09.377913 | orchestrator | 2026-04-05 02:38:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:12.433979 | orchestrator | 2026-04-05 02:38:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:12.435316 | orchestrator | 2026-04-05 02:38:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:12.435407 | orchestrator | 2026-04-05 02:38:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:15.485404 | orchestrator | 2026-04-05 02:38:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:15.486926 | orchestrator | 2026-04-05 02:38:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:15.486979 | orchestrator | 2026-04-05 02:38:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:18.536648 | orchestrator | 2026-04-05 02:38:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:18.539656 | orchestrator | 2026-04-05 02:38:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:38:18.539695 | orchestrator | 2026-04-05 02:38:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:21.592209 | orchestrator | 2026-04-05 02:38:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:21.593605 | orchestrator | 2026-04-05 02:38:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:21.593655 | orchestrator | 2026-04-05 02:38:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:24.643157 | orchestrator | 2026-04-05 02:38:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:24.644856 | orchestrator | 2026-04-05 02:38:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:24.644892 | orchestrator | 2026-04-05 02:38:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:27.693948 | orchestrator | 2026-04-05 02:38:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:27.695853 | orchestrator | 2026-04-05 02:38:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:27.695905 | orchestrator | 2026-04-05 02:38:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:30.739621 | orchestrator | 2026-04-05 02:38:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:30.742936 | orchestrator | 2026-04-05 02:38:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:30.743034 | orchestrator | 2026-04-05 02:38:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:33.786915 | orchestrator | 2026-04-05 02:38:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:33.789275 | orchestrator | 2026-04-05 02:38:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:33.789328 | orchestrator | 2026-04-05 02:38:33 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:38:36.840812 | orchestrator | 2026-04-05 02:38:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:36.843305 | orchestrator | 2026-04-05 02:38:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:36.843404 | orchestrator | 2026-04-05 02:38:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:39.895248 | orchestrator | 2026-04-05 02:38:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:39.895935 | orchestrator | 2026-04-05 02:38:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:39.896315 | orchestrator | 2026-04-05 02:38:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:42.948489 | orchestrator | 2026-04-05 02:38:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:42.950191 | orchestrator | 2026-04-05 02:38:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:42.950228 | orchestrator | 2026-04-05 02:38:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:46.004437 | orchestrator | 2026-04-05 02:38:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:46.008688 | orchestrator | 2026-04-05 02:38:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:46.008819 | orchestrator | 2026-04-05 02:38:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:49.060938 | orchestrator | 2026-04-05 02:38:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:49.062353 | orchestrator | 2026-04-05 02:38:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:49.062390 | orchestrator | 2026-04-05 02:38:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:52.113311 | orchestrator | 2026-04-05 
02:38:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:52.115306 | orchestrator | 2026-04-05 02:38:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:52.115369 | orchestrator | 2026-04-05 02:38:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:55.163405 | orchestrator | 2026-04-05 02:38:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:55.165871 | orchestrator | 2026-04-05 02:38:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:55.165934 | orchestrator | 2026-04-05 02:38:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:38:58.221286 | orchestrator | 2026-04-05 02:38:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:38:58.225925 | orchestrator | 2026-04-05 02:38:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:38:58.226118 | orchestrator | 2026-04-05 02:38:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:01.277176 | orchestrator | 2026-04-05 02:39:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:01.278881 | orchestrator | 2026-04-05 02:39:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:01.278940 | orchestrator | 2026-04-05 02:39:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:04.337879 | orchestrator | 2026-04-05 02:39:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:04.340190 | orchestrator | 2026-04-05 02:39:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:04.340251 | orchestrator | 2026-04-05 02:39:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:07.398203 | orchestrator | 2026-04-05 02:39:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:39:07.400348 | orchestrator | 2026-04-05 02:39:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:07.400419 | orchestrator | 2026-04-05 02:39:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:10.456128 | orchestrator | 2026-04-05 02:39:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:10.458509 | orchestrator | 2026-04-05 02:39:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:10.458607 | orchestrator | 2026-04-05 02:39:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:13.506094 | orchestrator | 2026-04-05 02:39:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:13.508040 | orchestrator | 2026-04-05 02:39:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:13.508060 | orchestrator | 2026-04-05 02:39:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:16.566009 | orchestrator | 2026-04-05 02:39:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:16.569004 | orchestrator | 2026-04-05 02:39:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:16.569053 | orchestrator | 2026-04-05 02:39:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:19.622526 | orchestrator | 2026-04-05 02:39:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:19.624439 | orchestrator | 2026-04-05 02:39:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:19.624580 | orchestrator | 2026-04-05 02:39:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:22.673192 | orchestrator | 2026-04-05 02:39:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:22.675672 | orchestrator | 2026-04-05 02:39:22 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:22.675789 | orchestrator | 2026-04-05 02:39:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:25.729155 | orchestrator | 2026-04-05 02:39:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:25.729474 | orchestrator | 2026-04-05 02:39:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:25.729591 | orchestrator | 2026-04-05 02:39:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:28.776981 | orchestrator | 2026-04-05 02:39:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:28.780649 | orchestrator | 2026-04-05 02:39:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:28.780714 | orchestrator | 2026-04-05 02:39:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:31.829872 | orchestrator | 2026-04-05 02:39:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:31.830976 | orchestrator | 2026-04-05 02:39:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:31.831021 | orchestrator | 2026-04-05 02:39:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:34.878121 | orchestrator | 2026-04-05 02:39:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:34.879589 | orchestrator | 2026-04-05 02:39:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:34.879630 | orchestrator | 2026-04-05 02:39:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:37.931454 | orchestrator | 2026-04-05 02:39:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:37.934129 | orchestrator | 2026-04-05 02:39:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:39:37.934210 | orchestrator | 2026-04-05 02:39:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:40.992397 | orchestrator | 2026-04-05 02:39:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:40.994513 | orchestrator | 2026-04-05 02:39:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:40.994652 | orchestrator | 2026-04-05 02:39:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:44.051468 | orchestrator | 2026-04-05 02:39:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:44.053327 | orchestrator | 2026-04-05 02:39:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:44.053387 | orchestrator | 2026-04-05 02:39:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:47.107948 | orchestrator | 2026-04-05 02:39:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:47.110380 | orchestrator | 2026-04-05 02:39:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:47.110450 | orchestrator | 2026-04-05 02:39:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:50.161867 | orchestrator | 2026-04-05 02:39:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:50.163734 | orchestrator | 2026-04-05 02:39:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:50.163831 | orchestrator | 2026-04-05 02:39:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:53.218675 | orchestrator | 2026-04-05 02:39:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:53.220238 | orchestrator | 2026-04-05 02:39:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:53.220326 | orchestrator | 2026-04-05 02:39:53 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:39:56.262425 | orchestrator | 2026-04-05 02:39:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:56.263053 | orchestrator | 2026-04-05 02:39:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:56.263111 | orchestrator | 2026-04-05 02:39:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:39:59.296921 | orchestrator | 2026-04-05 02:39:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:39:59.298208 | orchestrator | 2026-04-05 02:39:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:39:59.298255 | orchestrator | 2026-04-05 02:39:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:02.343707 | orchestrator | 2026-04-05 02:40:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:40:02.345233 | orchestrator | 2026-04-05 02:40:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:40:02.345327 | orchestrator | 2026-04-05 02:40:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:05.382747 | orchestrator | 2026-04-05 02:40:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:40:05.383146 | orchestrator | 2026-04-05 02:40:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:40:05.383439 | orchestrator | 2026-04-05 02:40:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:08.418382 | orchestrator | 2026-04-05 02:40:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:40:08.420815 | orchestrator | 2026-04-05 02:40:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:40:08.421017 | orchestrator | 2026-04-05 02:40:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:11.471862 | orchestrator | 2026-04-05 
02:40:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:40:11.474678 | orchestrator | 2026-04-05 02:40:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:40:11.474749 | orchestrator | 2026-04-05 02:40:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:14.527232 | orchestrator | 2026-04-05 02:40:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:40:14.529742 | orchestrator | 2026-04-05 02:40:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:40:14.529883 | orchestrator | 2026-04-05 02:40:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:17.578280 | orchestrator | 2026-04-05 02:40:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:40:17.580246 | orchestrator | 2026-04-05 02:40:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:40:17.580301 | orchestrator | 2026-04-05 02:40:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:20.627746 | orchestrator | 2026-04-05 02:40:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:40:20.630474 | orchestrator | 2026-04-05 02:40:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:40:20.630569 | orchestrator | 2026-04-05 02:40:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:23.681280 | orchestrator | 2026-04-05 02:40:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:40:23.685216 | orchestrator | 2026-04-05 02:40:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:40:23.685302 | orchestrator | 2026-04-05 02:40:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:40:26.736557 | orchestrator | 2026-04-05 02:40:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:40:26.737897 | orchestrator | 2026-04-05 02:40:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 02:40:26.737949 | orchestrator | 2026-04-05 02:40:26 | INFO  | Wait 1 second(s) until the next check
2026-04-05 02:45:59.346368 | orchestrator | 2026-04-05 02:45:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 02:45:59.347278 | orchestrator | 2026-04-05 02:45:59 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:45:59.347475 | orchestrator | 2026-04-05 02:45:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:02.393515 | orchestrator | 2026-04-05 02:46:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:02.396048 | orchestrator | 2026-04-05 02:46:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:02.396094 | orchestrator | 2026-04-05 02:46:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:05.443672 | orchestrator | 2026-04-05 02:46:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:05.445261 | orchestrator | 2026-04-05 02:46:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:05.445313 | orchestrator | 2026-04-05 02:46:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:08.498618 | orchestrator | 2026-04-05 02:46:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:08.500526 | orchestrator | 2026-04-05 02:46:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:08.500580 | orchestrator | 2026-04-05 02:46:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:11.548431 | orchestrator | 2026-04-05 02:46:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:11.549746 | orchestrator | 2026-04-05 02:46:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:11.549813 | orchestrator | 2026-04-05 02:46:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:14.596658 | orchestrator | 2026-04-05 02:46:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:14.599137 | orchestrator | 2026-04-05 02:46:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:46:14.599186 | orchestrator | 2026-04-05 02:46:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:17.648383 | orchestrator | 2026-04-05 02:46:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:17.651004 | orchestrator | 2026-04-05 02:46:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:17.651148 | orchestrator | 2026-04-05 02:46:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:20.702939 | orchestrator | 2026-04-05 02:46:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:20.705136 | orchestrator | 2026-04-05 02:46:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:20.705215 | orchestrator | 2026-04-05 02:46:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:23.759774 | orchestrator | 2026-04-05 02:46:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:23.761053 | orchestrator | 2026-04-05 02:46:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:23.761096 | orchestrator | 2026-04-05 02:46:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:26.807362 | orchestrator | 2026-04-05 02:46:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:26.810673 | orchestrator | 2026-04-05 02:46:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:26.810737 | orchestrator | 2026-04-05 02:46:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:29.850273 | orchestrator | 2026-04-05 02:46:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:29.851953 | orchestrator | 2026-04-05 02:46:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:29.852024 | orchestrator | 2026-04-05 02:46:29 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:46:32.897810 | orchestrator | 2026-04-05 02:46:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:32.898532 | orchestrator | 2026-04-05 02:46:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:32.898578 | orchestrator | 2026-04-05 02:46:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:35.938228 | orchestrator | 2026-04-05 02:46:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:35.939277 | orchestrator | 2026-04-05 02:46:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:35.939327 | orchestrator | 2026-04-05 02:46:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:38.986456 | orchestrator | 2026-04-05 02:46:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:38.987927 | orchestrator | 2026-04-05 02:46:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:38.988031 | orchestrator | 2026-04-05 02:46:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:42.036167 | orchestrator | 2026-04-05 02:46:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:42.038209 | orchestrator | 2026-04-05 02:46:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:42.038261 | orchestrator | 2026-04-05 02:46:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:45.086513 | orchestrator | 2026-04-05 02:46:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:45.087550 | orchestrator | 2026-04-05 02:46:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:45.087600 | orchestrator | 2026-04-05 02:46:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:48.133640 | orchestrator | 2026-04-05 
02:46:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:48.134600 | orchestrator | 2026-04-05 02:46:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:48.134650 | orchestrator | 2026-04-05 02:46:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:51.182279 | orchestrator | 2026-04-05 02:46:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:51.183647 | orchestrator | 2026-04-05 02:46:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:51.183672 | orchestrator | 2026-04-05 02:46:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:54.240340 | orchestrator | 2026-04-05 02:46:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:54.242588 | orchestrator | 2026-04-05 02:46:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:54.242665 | orchestrator | 2026-04-05 02:46:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:46:57.289008 | orchestrator | 2026-04-05 02:46:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:46:57.291156 | orchestrator | 2026-04-05 02:46:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:46:57.291269 | orchestrator | 2026-04-05 02:46:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:00.340351 | orchestrator | 2026-04-05 02:47:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:00.341938 | orchestrator | 2026-04-05 02:47:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:00.341972 | orchestrator | 2026-04-05 02:47:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:03.383389 | orchestrator | 2026-04-05 02:47:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:47:03.384090 | orchestrator | 2026-04-05 02:47:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:03.384136 | orchestrator | 2026-04-05 02:47:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:06.435658 | orchestrator | 2026-04-05 02:47:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:06.437731 | orchestrator | 2026-04-05 02:47:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:06.437779 | orchestrator | 2026-04-05 02:47:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:09.485830 | orchestrator | 2026-04-05 02:47:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:09.487962 | orchestrator | 2026-04-05 02:47:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:09.488043 | orchestrator | 2026-04-05 02:47:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:12.531894 | orchestrator | 2026-04-05 02:47:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:12.533511 | orchestrator | 2026-04-05 02:47:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:12.533544 | orchestrator | 2026-04-05 02:47:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:15.586104 | orchestrator | 2026-04-05 02:47:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:15.587594 | orchestrator | 2026-04-05 02:47:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:15.587648 | orchestrator | 2026-04-05 02:47:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:18.640048 | orchestrator | 2026-04-05 02:47:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:18.641032 | orchestrator | 2026-04-05 02:47:18 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:18.641134 | orchestrator | 2026-04-05 02:47:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:21.698169 | orchestrator | 2026-04-05 02:47:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:21.700222 | orchestrator | 2026-04-05 02:47:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:21.700350 | orchestrator | 2026-04-05 02:47:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:24.754864 | orchestrator | 2026-04-05 02:47:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:24.756608 | orchestrator | 2026-04-05 02:47:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:24.756684 | orchestrator | 2026-04-05 02:47:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:27.812237 | orchestrator | 2026-04-05 02:47:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:27.814072 | orchestrator | 2026-04-05 02:47:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:27.814122 | orchestrator | 2026-04-05 02:47:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:30.855273 | orchestrator | 2026-04-05 02:47:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:30.857426 | orchestrator | 2026-04-05 02:47:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:30.857497 | orchestrator | 2026-04-05 02:47:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:33.905887 | orchestrator | 2026-04-05 02:47:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:33.908418 | orchestrator | 2026-04-05 02:47:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:47:33.908478 | orchestrator | 2026-04-05 02:47:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:36.952673 | orchestrator | 2026-04-05 02:47:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:36.955965 | orchestrator | 2026-04-05 02:47:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:36.956022 | orchestrator | 2026-04-05 02:47:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:40.004225 | orchestrator | 2026-04-05 02:47:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:40.005061 | orchestrator | 2026-04-05 02:47:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:40.005094 | orchestrator | 2026-04-05 02:47:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:43.053035 | orchestrator | 2026-04-05 02:47:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:43.055118 | orchestrator | 2026-04-05 02:47:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:43.055172 | orchestrator | 2026-04-05 02:47:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:46.106328 | orchestrator | 2026-04-05 02:47:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:46.109132 | orchestrator | 2026-04-05 02:47:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:46.109198 | orchestrator | 2026-04-05 02:47:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:49.159563 | orchestrator | 2026-04-05 02:47:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:49.161549 | orchestrator | 2026-04-05 02:47:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:49.161616 | orchestrator | 2026-04-05 02:47:49 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:47:52.211964 | orchestrator | 2026-04-05 02:47:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:52.214148 | orchestrator | 2026-04-05 02:47:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:52.214201 | orchestrator | 2026-04-05 02:47:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:55.265068 | orchestrator | 2026-04-05 02:47:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:55.267022 | orchestrator | 2026-04-05 02:47:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:55.267105 | orchestrator | 2026-04-05 02:47:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:47:58.316767 | orchestrator | 2026-04-05 02:47:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:47:58.318572 | orchestrator | 2026-04-05 02:47:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:47:58.318628 | orchestrator | 2026-04-05 02:47:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:01.367873 | orchestrator | 2026-04-05 02:48:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:01.370429 | orchestrator | 2026-04-05 02:48:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:01.370493 | orchestrator | 2026-04-05 02:48:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:04.414332 | orchestrator | 2026-04-05 02:48:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:04.415814 | orchestrator | 2026-04-05 02:48:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:04.415895 | orchestrator | 2026-04-05 02:48:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:07.459121 | orchestrator | 2026-04-05 
02:48:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:07.459442 | orchestrator | 2026-04-05 02:48:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:07.459483 | orchestrator | 2026-04-05 02:48:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:10.511951 | orchestrator | 2026-04-05 02:48:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:10.513945 | orchestrator | 2026-04-05 02:48:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:10.514105 | orchestrator | 2026-04-05 02:48:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:13.565563 | orchestrator | 2026-04-05 02:48:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:13.567187 | orchestrator | 2026-04-05 02:48:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:13.567244 | orchestrator | 2026-04-05 02:48:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:16.616822 | orchestrator | 2026-04-05 02:48:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:16.619536 | orchestrator | 2026-04-05 02:48:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:16.619610 | orchestrator | 2026-04-05 02:48:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:19.658817 | orchestrator | 2026-04-05 02:48:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:19.662014 | orchestrator | 2026-04-05 02:48:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:19.662605 | orchestrator | 2026-04-05 02:48:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:22.707254 | orchestrator | 2026-04-05 02:48:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:48:22.711375 | orchestrator | 2026-04-05 02:48:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:22.711462 | orchestrator | 2026-04-05 02:48:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:25.757815 | orchestrator | 2026-04-05 02:48:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:25.760041 | orchestrator | 2026-04-05 02:48:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:25.760615 | orchestrator | 2026-04-05 02:48:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:28.809630 | orchestrator | 2026-04-05 02:48:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:28.811862 | orchestrator | 2026-04-05 02:48:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:28.811916 | orchestrator | 2026-04-05 02:48:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:31.856749 | orchestrator | 2026-04-05 02:48:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:31.858831 | orchestrator | 2026-04-05 02:48:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:31.858891 | orchestrator | 2026-04-05 02:48:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:34.901683 | orchestrator | 2026-04-05 02:48:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:34.902630 | orchestrator | 2026-04-05 02:48:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:34.902676 | orchestrator | 2026-04-05 02:48:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:37.953386 | orchestrator | 2026-04-05 02:48:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:37.955211 | orchestrator | 2026-04-05 02:48:37 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:37.955289 | orchestrator | 2026-04-05 02:48:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:41.001910 | orchestrator | 2026-04-05 02:48:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:41.004612 | orchestrator | 2026-04-05 02:48:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:41.004678 | orchestrator | 2026-04-05 02:48:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:44.039051 | orchestrator | 2026-04-05 02:48:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:44.039476 | orchestrator | 2026-04-05 02:48:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:44.039823 | orchestrator | 2026-04-05 02:48:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:47.083356 | orchestrator | 2026-04-05 02:48:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:47.085913 | orchestrator | 2026-04-05 02:48:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:47.085998 | orchestrator | 2026-04-05 02:48:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:50.135988 | orchestrator | 2026-04-05 02:48:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:50.137486 | orchestrator | 2026-04-05 02:48:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:50.137553 | orchestrator | 2026-04-05 02:48:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:53.186135 | orchestrator | 2026-04-05 02:48:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:53.187731 | orchestrator | 2026-04-05 02:48:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:48:53.187775 | orchestrator | 2026-04-05 02:48:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:56.243014 | orchestrator | 2026-04-05 02:48:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:56.244221 | orchestrator | 2026-04-05 02:48:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:56.244253 | orchestrator | 2026-04-05 02:48:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:48:59.286760 | orchestrator | 2026-04-05 02:48:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:48:59.288862 | orchestrator | 2026-04-05 02:48:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:48:59.288929 | orchestrator | 2026-04-05 02:48:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:02.341080 | orchestrator | 2026-04-05 02:49:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:02.342516 | orchestrator | 2026-04-05 02:49:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:02.342564 | orchestrator | 2026-04-05 02:49:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:05.397766 | orchestrator | 2026-04-05 02:49:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:05.399795 | orchestrator | 2026-04-05 02:49:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:05.399889 | orchestrator | 2026-04-05 02:49:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:08.453026 | orchestrator | 2026-04-05 02:49:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:08.455807 | orchestrator | 2026-04-05 02:49:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:08.455881 | orchestrator | 2026-04-05 02:49:08 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:49:11.512093 | orchestrator | 2026-04-05 02:49:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:11.514601 | orchestrator | 2026-04-05 02:49:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:11.514681 | orchestrator | 2026-04-05 02:49:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:14.565710 | orchestrator | 2026-04-05 02:49:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:14.568654 | orchestrator | 2026-04-05 02:49:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:14.568740 | orchestrator | 2026-04-05 02:49:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:17.624295 | orchestrator | 2026-04-05 02:49:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:17.628036 | orchestrator | 2026-04-05 02:49:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:17.628123 | orchestrator | 2026-04-05 02:49:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:20.673005 | orchestrator | 2026-04-05 02:49:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:20.674088 | orchestrator | 2026-04-05 02:49:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:20.674132 | orchestrator | 2026-04-05 02:49:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:23.724800 | orchestrator | 2026-04-05 02:49:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:23.726176 | orchestrator | 2026-04-05 02:49:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:23.726225 | orchestrator | 2026-04-05 02:49:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:26.780919 | orchestrator | 2026-04-05 
02:49:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:26.781803 | orchestrator | 2026-04-05 02:49:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:26.781825 | orchestrator | 2026-04-05 02:49:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:29.841886 | orchestrator | 2026-04-05 02:49:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:29.845147 | orchestrator | 2026-04-05 02:49:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:29.845275 | orchestrator | 2026-04-05 02:49:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:32.902469 | orchestrator | 2026-04-05 02:49:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:32.905948 | orchestrator | 2026-04-05 02:49:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:32.906241 | orchestrator | 2026-04-05 02:49:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:35.960994 | orchestrator | 2026-04-05 02:49:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:35.962298 | orchestrator | 2026-04-05 02:49:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:35.962349 | orchestrator | 2026-04-05 02:49:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:39.010830 | orchestrator | 2026-04-05 02:49:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:49:39.012015 | orchestrator | 2026-04-05 02:49:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:49:39.012446 | orchestrator | 2026-04-05 02:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:49:42.059602 | orchestrator | 2026-04-05 02:49:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:49:42.061832 | orchestrator | 2026-04-05 02:49:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 02:49:42.061893 | orchestrator | 2026-04-05 02:49:42 | INFO  | Wait 1 second(s) until the next check
[... ~100 repeated polling cycles elided: tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 remained in state STARTED, checked every ~3 seconds from 02:49:45 through 02:54:56 ...]
2026-04-05 02:54:59.289916 | orchestrator | 2026-04-05 02:54:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state
STARTED 2026-04-05 02:54:59.291340 | orchestrator | 2026-04-05 02:54:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:54:59.291397 | orchestrator | 2026-04-05 02:54:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:55:02.341014 | orchestrator | 2026-04-05 02:55:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:55:02.344797 | orchestrator | 2026-04-05 02:55:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:55:02.344872 | orchestrator | 2026-04-05 02:55:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:55:05.394850 | orchestrator | 2026-04-05 02:55:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:55:05.396953 | orchestrator | 2026-04-05 02:55:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:55:05.397104 | orchestrator | 2026-04-05 02:55:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:55:08.449029 | orchestrator | 2026-04-05 02:55:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:55:08.449966 | orchestrator | 2026-04-05 02:55:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:55:08.450007 | orchestrator | 2026-04-05 02:55:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:11.607359 | orchestrator | 2026-04-05 02:57:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:11.607550 | orchestrator | 2026-04-05 02:57:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:11.607594 | orchestrator | 2026-04-05 02:57:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:14.651766 | orchestrator | 2026-04-05 02:57:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:14.653318 | orchestrator | 2026-04-05 02:57:14 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:14.653404 | orchestrator | 2026-04-05 02:57:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:17.700437 | orchestrator | 2026-04-05 02:57:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:17.702708 | orchestrator | 2026-04-05 02:57:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:17.702747 | orchestrator | 2026-04-05 02:57:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:20.750818 | orchestrator | 2026-04-05 02:57:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:20.752579 | orchestrator | 2026-04-05 02:57:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:20.752734 | orchestrator | 2026-04-05 02:57:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:23.796738 | orchestrator | 2026-04-05 02:57:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:23.797790 | orchestrator | 2026-04-05 02:57:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:23.798720 | orchestrator | 2026-04-05 02:57:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:26.841125 | orchestrator | 2026-04-05 02:57:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:26.842349 | orchestrator | 2026-04-05 02:57:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:26.842504 | orchestrator | 2026-04-05 02:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:29.890542 | orchestrator | 2026-04-05 02:57:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:29.892425 | orchestrator | 2026-04-05 02:57:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:57:29.892514 | orchestrator | 2026-04-05 02:57:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:32.944271 | orchestrator | 2026-04-05 02:57:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:32.946598 | orchestrator | 2026-04-05 02:57:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:32.946646 | orchestrator | 2026-04-05 02:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:35.994097 | orchestrator | 2026-04-05 02:57:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:35.995201 | orchestrator | 2026-04-05 02:57:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:35.995330 | orchestrator | 2026-04-05 02:57:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:39.039204 | orchestrator | 2026-04-05 02:57:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:39.042101 | orchestrator | 2026-04-05 02:57:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:39.042161 | orchestrator | 2026-04-05 02:57:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:42.092283 | orchestrator | 2026-04-05 02:57:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:42.094368 | orchestrator | 2026-04-05 02:57:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:42.094414 | orchestrator | 2026-04-05 02:57:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:45.144401 | orchestrator | 2026-04-05 02:57:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:45.145705 | orchestrator | 2026-04-05 02:57:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:45.145733 | orchestrator | 2026-04-05 02:57:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:57:48.189630 | orchestrator | 2026-04-05 02:57:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:48.193314 | orchestrator | 2026-04-05 02:57:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:48.193400 | orchestrator | 2026-04-05 02:57:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:51.243579 | orchestrator | 2026-04-05 02:57:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:51.246330 | orchestrator | 2026-04-05 02:57:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:51.246427 | orchestrator | 2026-04-05 02:57:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:54.288310 | orchestrator | 2026-04-05 02:57:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:54.290768 | orchestrator | 2026-04-05 02:57:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:54.290818 | orchestrator | 2026-04-05 02:57:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:57:57.332672 | orchestrator | 2026-04-05 02:57:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:57:57.334819 | orchestrator | 2026-04-05 02:57:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:57:57.334885 | orchestrator | 2026-04-05 02:57:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:00.374269 | orchestrator | 2026-04-05 02:58:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:00.375927 | orchestrator | 2026-04-05 02:58:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:00.375974 | orchestrator | 2026-04-05 02:58:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:03.427808 | orchestrator | 2026-04-05 
02:58:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:03.430469 | orchestrator | 2026-04-05 02:58:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:03.430548 | orchestrator | 2026-04-05 02:58:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:06.482779 | orchestrator | 2026-04-05 02:58:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:06.484936 | orchestrator | 2026-04-05 02:58:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:06.484998 | orchestrator | 2026-04-05 02:58:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:09.524792 | orchestrator | 2026-04-05 02:58:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:09.526335 | orchestrator | 2026-04-05 02:58:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:09.526402 | orchestrator | 2026-04-05 02:58:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:12.559654 | orchestrator | 2026-04-05 02:58:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:12.561035 | orchestrator | 2026-04-05 02:58:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:12.561076 | orchestrator | 2026-04-05 02:58:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:15.611349 | orchestrator | 2026-04-05 02:58:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:15.614067 | orchestrator | 2026-04-05 02:58:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:15.614147 | orchestrator | 2026-04-05 02:58:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:18.662779 | orchestrator | 2026-04-05 02:58:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:58:18.663417 | orchestrator | 2026-04-05 02:58:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:18.663473 | orchestrator | 2026-04-05 02:58:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:21.713819 | orchestrator | 2026-04-05 02:58:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:21.714819 | orchestrator | 2026-04-05 02:58:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:21.714976 | orchestrator | 2026-04-05 02:58:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:24.751031 | orchestrator | 2026-04-05 02:58:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:24.752641 | orchestrator | 2026-04-05 02:58:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:24.752712 | orchestrator | 2026-04-05 02:58:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:27.786325 | orchestrator | 2026-04-05 02:58:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:27.787179 | orchestrator | 2026-04-05 02:58:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:27.787230 | orchestrator | 2026-04-05 02:58:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:30.822323 | orchestrator | 2026-04-05 02:58:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:30.824951 | orchestrator | 2026-04-05 02:58:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:30.824994 | orchestrator | 2026-04-05 02:58:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:33.858815 | orchestrator | 2026-04-05 02:58:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:33.860259 | orchestrator | 2026-04-05 02:58:33 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:33.860306 | orchestrator | 2026-04-05 02:58:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:36.897986 | orchestrator | 2026-04-05 02:58:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:36.902522 | orchestrator | 2026-04-05 02:58:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:36.902689 | orchestrator | 2026-04-05 02:58:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:39.949253 | orchestrator | 2026-04-05 02:58:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:39.950313 | orchestrator | 2026-04-05 02:58:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:39.950403 | orchestrator | 2026-04-05 02:58:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:42.994006 | orchestrator | 2026-04-05 02:58:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:42.997300 | orchestrator | 2026-04-05 02:58:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:42.997384 | orchestrator | 2026-04-05 02:58:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:46.040968 | orchestrator | 2026-04-05 02:58:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:46.042286 | orchestrator | 2026-04-05 02:58:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:46.042350 | orchestrator | 2026-04-05 02:58:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:49.092968 | orchestrator | 2026-04-05 02:58:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:49.093136 | orchestrator | 2026-04-05 02:58:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
02:58:49.093168 | orchestrator | 2026-04-05 02:58:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:52.134365 | orchestrator | 2026-04-05 02:58:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:52.135566 | orchestrator | 2026-04-05 02:58:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:52.135631 | orchestrator | 2026-04-05 02:58:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:55.173549 | orchestrator | 2026-04-05 02:58:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:55.174074 | orchestrator | 2026-04-05 02:58:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:55.174144 | orchestrator | 2026-04-05 02:58:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:58:58.215922 | orchestrator | 2026-04-05 02:58:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:58:58.217611 | orchestrator | 2026-04-05 02:58:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:58:58.217661 | orchestrator | 2026-04-05 02:58:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:01.257860 | orchestrator | 2026-04-05 02:59:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:01.260874 | orchestrator | 2026-04-05 02:59:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:01.260950 | orchestrator | 2026-04-05 02:59:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:04.296118 | orchestrator | 2026-04-05 02:59:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:04.296786 | orchestrator | 2026-04-05 02:59:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:04.296821 | orchestrator | 2026-04-05 02:59:04 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 02:59:07.347328 | orchestrator | 2026-04-05 02:59:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:07.348371 | orchestrator | 2026-04-05 02:59:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:07.348414 | orchestrator | 2026-04-05 02:59:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:10.396023 | orchestrator | 2026-04-05 02:59:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:10.398719 | orchestrator | 2026-04-05 02:59:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:10.398793 | orchestrator | 2026-04-05 02:59:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:13.438754 | orchestrator | 2026-04-05 02:59:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:13.441118 | orchestrator | 2026-04-05 02:59:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:13.441215 | orchestrator | 2026-04-05 02:59:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:16.480277 | orchestrator | 2026-04-05 02:59:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:16.482799 | orchestrator | 2026-04-05 02:59:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:16.482841 | orchestrator | 2026-04-05 02:59:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:19.522526 | orchestrator | 2026-04-05 02:59:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:19.524279 | orchestrator | 2026-04-05 02:59:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:19.524460 | orchestrator | 2026-04-05 02:59:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:22.578628 | orchestrator | 2026-04-05 
02:59:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:22.580407 | orchestrator | 2026-04-05 02:59:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:22.580486 | orchestrator | 2026-04-05 02:59:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:25.621574 | orchestrator | 2026-04-05 02:59:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:25.623863 | orchestrator | 2026-04-05 02:59:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:25.624231 | orchestrator | 2026-04-05 02:59:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:28.666621 | orchestrator | 2026-04-05 02:59:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:28.668603 | orchestrator | 2026-04-05 02:59:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:28.668644 | orchestrator | 2026-04-05 02:59:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:31.708800 | orchestrator | 2026-04-05 02:59:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:31.710703 | orchestrator | 2026-04-05 02:59:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:31.710761 | orchestrator | 2026-04-05 02:59:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:34.764031 | orchestrator | 2026-04-05 02:59:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:34.765801 | orchestrator | 2026-04-05 02:59:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:34.765863 | orchestrator | 2026-04-05 02:59:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:37.815744 | orchestrator | 2026-04-05 02:59:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 02:59:37.818325 | orchestrator | 2026-04-05 02:59:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:37.818445 | orchestrator | 2026-04-05 02:59:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:40.861738 | orchestrator | 2026-04-05 02:59:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:40.862955 | orchestrator | 2026-04-05 02:59:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:40.862991 | orchestrator | 2026-04-05 02:59:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:43.908279 | orchestrator | 2026-04-05 02:59:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:43.911075 | orchestrator | 2026-04-05 02:59:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:43.911136 | orchestrator | 2026-04-05 02:59:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:46.959706 | orchestrator | 2026-04-05 02:59:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:46.962314 | orchestrator | 2026-04-05 02:59:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:46.962402 | orchestrator | 2026-04-05 02:59:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:50.005848 | orchestrator | 2026-04-05 02:59:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:50.007677 | orchestrator | 2026-04-05 02:59:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:50.007729 | orchestrator | 2026-04-05 02:59:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:53.045591 | orchestrator | 2026-04-05 02:59:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:53.047166 | orchestrator | 2026-04-05 02:59:53 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:53.047355 | orchestrator | 2026-04-05 02:59:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:56.083949 | orchestrator | 2026-04-05 02:59:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:56.086587 | orchestrator | 2026-04-05 02:59:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:56.086640 | orchestrator | 2026-04-05 02:59:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 02:59:59.129414 | orchestrator | 2026-04-05 02:59:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 02:59:59.131168 | orchestrator | 2026-04-05 02:59:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 02:59:59.131246 | orchestrator | 2026-04-05 02:59:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:00:02.182319 | orchestrator | 2026-04-05 03:00:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:02.185351 | orchestrator | 2026-04-05 03:00:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:00:02.185561 | orchestrator | 2026-04-05 03:00:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:00:05.225083 | orchestrator | 2026-04-05 03:00:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:05.226962 | orchestrator | 2026-04-05 03:00:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:00:05.227018 | orchestrator | 2026-04-05 03:00:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:00:08.272820 | orchestrator | 2026-04-05 03:00:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:08.273651 | orchestrator | 2026-04-05 03:00:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:00:08.273742 | orchestrator | 2026-04-05 03:00:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:00:11.319256 | orchestrator | 2026-04-05 03:00:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:11.320061 | orchestrator | 2026-04-05 03:00:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:00:11.320218 | orchestrator | 2026-04-05 03:00:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:00:14.359082 | orchestrator | 2026-04-05 03:00:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:14.359441 | orchestrator | 2026-04-05 03:00:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:00:14.359474 | orchestrator | 2026-04-05 03:00:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:00:17.412521 | orchestrator | 2026-04-05 03:00:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:17.413773 | orchestrator | 2026-04-05 03:00:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:00:17.413827 | orchestrator | 2026-04-05 03:00:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:00:20.459615 | orchestrator | 2026-04-05 03:00:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:20.461885 | orchestrator | 2026-04-05 03:00:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:00:20.461985 | orchestrator | 2026-04-05 03:00:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:00:23.514758 | orchestrator | 2026-04-05 03:00:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:23.515751 | orchestrator | 2026-04-05 03:00:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:00:23.515804 | orchestrator | 2026-04-05 03:00:23 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:00:26.560857 | orchestrator | 2026-04-05 03:00:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:00:26.560954 | orchestrator | 2026-04-05 03:00:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:00:26.560968 | orchestrator | 2026-04-05 03:00:26 | INFO  | Wait 1 second(s) until the next check
[... preceding three log lines repeated every ~3 seconds from 03:00:29 through 03:05:37; both tasks remained in state STARTED throughout ...]
2026-04-05 03:05:40.519221 | orchestrator | 2026-04-05 03:05:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:05:40.521544 | orchestrator | 2026-04-05 03:05:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:05:40.521780 | orchestrator | 2026-04-05 03:05:40 | INFO  | Wait 1 second(s)
until the next check 2026-04-05 03:05:43.575447 | orchestrator | 2026-04-05 03:05:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:05:43.577366 | orchestrator | 2026-04-05 03:05:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:05:43.577425 | orchestrator | 2026-04-05 03:05:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:05:46.624854 | orchestrator | 2026-04-05 03:05:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:05:46.626008 | orchestrator | 2026-04-05 03:05:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:05:46.626114 | orchestrator | 2026-04-05 03:05:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:05:49.675524 | orchestrator | 2026-04-05 03:05:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:05:49.677043 | orchestrator | 2026-04-05 03:05:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:05:49.677083 | orchestrator | 2026-04-05 03:05:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:05:52.725982 | orchestrator | 2026-04-05 03:05:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:05:52.727753 | orchestrator | 2026-04-05 03:05:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:05:52.727790 | orchestrator | 2026-04-05 03:05:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:05:55.783743 | orchestrator | 2026-04-05 03:05:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:05:55.784920 | orchestrator | 2026-04-05 03:05:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:05:55.785008 | orchestrator | 2026-04-05 03:05:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:05:58.835094 | orchestrator | 2026-04-05 
03:05:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:05:58.836004 | orchestrator | 2026-04-05 03:05:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:05:58.836054 | orchestrator | 2026-04-05 03:05:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:01.880917 | orchestrator | 2026-04-05 03:06:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:01.882519 | orchestrator | 2026-04-05 03:06:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:01.882574 | orchestrator | 2026-04-05 03:06:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:04.929067 | orchestrator | 2026-04-05 03:06:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:04.930262 | orchestrator | 2026-04-05 03:06:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:04.930386 | orchestrator | 2026-04-05 03:06:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:07.981515 | orchestrator | 2026-04-05 03:06:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:07.982149 | orchestrator | 2026-04-05 03:06:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:07.982192 | orchestrator | 2026-04-05 03:06:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:11.023866 | orchestrator | 2026-04-05 03:06:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:11.026524 | orchestrator | 2026-04-05 03:06:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:11.026607 | orchestrator | 2026-04-05 03:06:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:14.070497 | orchestrator | 2026-04-05 03:06:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:06:14.072720 | orchestrator | 2026-04-05 03:06:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:14.072773 | orchestrator | 2026-04-05 03:06:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:17.127203 | orchestrator | 2026-04-05 03:06:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:17.128540 | orchestrator | 2026-04-05 03:06:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:17.129002 | orchestrator | 2026-04-05 03:06:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:20.167692 | orchestrator | 2026-04-05 03:06:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:20.169248 | orchestrator | 2026-04-05 03:06:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:20.169301 | orchestrator | 2026-04-05 03:06:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:23.224764 | orchestrator | 2026-04-05 03:06:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:23.227987 | orchestrator | 2026-04-05 03:06:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:23.228098 | orchestrator | 2026-04-05 03:06:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:26.271829 | orchestrator | 2026-04-05 03:06:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:26.273154 | orchestrator | 2026-04-05 03:06:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:26.273207 | orchestrator | 2026-04-05 03:06:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:29.323287 | orchestrator | 2026-04-05 03:06:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:29.324948 | orchestrator | 2026-04-05 03:06:29 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:29.324999 | orchestrator | 2026-04-05 03:06:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:32.381114 | orchestrator | 2026-04-05 03:06:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:32.381611 | orchestrator | 2026-04-05 03:06:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:32.381702 | orchestrator | 2026-04-05 03:06:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:35.431301 | orchestrator | 2026-04-05 03:06:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:35.435535 | orchestrator | 2026-04-05 03:06:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:35.435593 | orchestrator | 2026-04-05 03:06:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:38.489990 | orchestrator | 2026-04-05 03:06:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:38.492778 | orchestrator | 2026-04-05 03:06:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:38.492972 | orchestrator | 2026-04-05 03:06:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:41.547673 | orchestrator | 2026-04-05 03:06:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:41.549911 | orchestrator | 2026-04-05 03:06:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:41.549988 | orchestrator | 2026-04-05 03:06:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:44.604744 | orchestrator | 2026-04-05 03:06:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:44.606466 | orchestrator | 2026-04-05 03:06:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:06:44.606499 | orchestrator | 2026-04-05 03:06:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:47.657917 | orchestrator | 2026-04-05 03:06:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:47.660683 | orchestrator | 2026-04-05 03:06:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:47.660740 | orchestrator | 2026-04-05 03:06:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:50.719509 | orchestrator | 2026-04-05 03:06:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:50.723493 | orchestrator | 2026-04-05 03:06:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:50.723582 | orchestrator | 2026-04-05 03:06:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:53.772478 | orchestrator | 2026-04-05 03:06:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:53.772731 | orchestrator | 2026-04-05 03:06:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:53.772751 | orchestrator | 2026-04-05 03:06:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:56.830765 | orchestrator | 2026-04-05 03:06:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:56.833716 | orchestrator | 2026-04-05 03:06:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:56.833797 | orchestrator | 2026-04-05 03:06:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:06:59.877448 | orchestrator | 2026-04-05 03:06:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:06:59.878496 | orchestrator | 2026-04-05 03:06:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:06:59.878543 | orchestrator | 2026-04-05 03:06:59 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:07:02.928100 | orchestrator | 2026-04-05 03:07:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:02.930078 | orchestrator | 2026-04-05 03:07:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:02.930146 | orchestrator | 2026-04-05 03:07:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:05.974483 | orchestrator | 2026-04-05 03:07:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:05.976374 | orchestrator | 2026-04-05 03:07:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:05.976455 | orchestrator | 2026-04-05 03:07:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:09.020978 | orchestrator | 2026-04-05 03:07:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:09.022972 | orchestrator | 2026-04-05 03:07:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:09.023016 | orchestrator | 2026-04-05 03:07:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:12.074630 | orchestrator | 2026-04-05 03:07:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:12.076583 | orchestrator | 2026-04-05 03:07:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:12.076624 | orchestrator | 2026-04-05 03:07:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:15.127557 | orchestrator | 2026-04-05 03:07:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:15.128011 | orchestrator | 2026-04-05 03:07:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:15.128084 | orchestrator | 2026-04-05 03:07:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:18.178619 | orchestrator | 2026-04-05 
03:07:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:18.181478 | orchestrator | 2026-04-05 03:07:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:18.181558 | orchestrator | 2026-04-05 03:07:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:21.232560 | orchestrator | 2026-04-05 03:07:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:21.234475 | orchestrator | 2026-04-05 03:07:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:21.234728 | orchestrator | 2026-04-05 03:07:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:24.289239 | orchestrator | 2026-04-05 03:07:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:24.290317 | orchestrator | 2026-04-05 03:07:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:24.290448 | orchestrator | 2026-04-05 03:07:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:27.339218 | orchestrator | 2026-04-05 03:07:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:27.342668 | orchestrator | 2026-04-05 03:07:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:27.342763 | orchestrator | 2026-04-05 03:07:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:30.393805 | orchestrator | 2026-04-05 03:07:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:30.394004 | orchestrator | 2026-04-05 03:07:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:30.394070 | orchestrator | 2026-04-05 03:07:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:33.443847 | orchestrator | 2026-04-05 03:07:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:07:33.445922 | orchestrator | 2026-04-05 03:07:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:33.445950 | orchestrator | 2026-04-05 03:07:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:36.493205 | orchestrator | 2026-04-05 03:07:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:36.494239 | orchestrator | 2026-04-05 03:07:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:36.494371 | orchestrator | 2026-04-05 03:07:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:39.546224 | orchestrator | 2026-04-05 03:07:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:39.548071 | orchestrator | 2026-04-05 03:07:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:39.548117 | orchestrator | 2026-04-05 03:07:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:42.604482 | orchestrator | 2026-04-05 03:07:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:42.606521 | orchestrator | 2026-04-05 03:07:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:42.606587 | orchestrator | 2026-04-05 03:07:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:45.662438 | orchestrator | 2026-04-05 03:07:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:45.663694 | orchestrator | 2026-04-05 03:07:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:45.663880 | orchestrator | 2026-04-05 03:07:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:48.704364 | orchestrator | 2026-04-05 03:07:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:48.705114 | orchestrator | 2026-04-05 03:07:48 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:48.705256 | orchestrator | 2026-04-05 03:07:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:51.757860 | orchestrator | 2026-04-05 03:07:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:51.761429 | orchestrator | 2026-04-05 03:07:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:51.761523 | orchestrator | 2026-04-05 03:07:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:54.804702 | orchestrator | 2026-04-05 03:07:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:54.805274 | orchestrator | 2026-04-05 03:07:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:54.805318 | orchestrator | 2026-04-05 03:07:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:07:57.858610 | orchestrator | 2026-04-05 03:07:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:07:57.860655 | orchestrator | 2026-04-05 03:07:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:07:57.860715 | orchestrator | 2026-04-05 03:07:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:00.909968 | orchestrator | 2026-04-05 03:08:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:00.910778 | orchestrator | 2026-04-05 03:08:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:00.910835 | orchestrator | 2026-04-05 03:08:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:03.964456 | orchestrator | 2026-04-05 03:08:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:03.966595 | orchestrator | 2026-04-05 03:08:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:08:03.967047 | orchestrator | 2026-04-05 03:08:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:07.020183 | orchestrator | 2026-04-05 03:08:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:07.021368 | orchestrator | 2026-04-05 03:08:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:07.021420 | orchestrator | 2026-04-05 03:08:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:10.071988 | orchestrator | 2026-04-05 03:08:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:10.073981 | orchestrator | 2026-04-05 03:08:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:10.074086 | orchestrator | 2026-04-05 03:08:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:13.122798 | orchestrator | 2026-04-05 03:08:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:13.124981 | orchestrator | 2026-04-05 03:08:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:13.125672 | orchestrator | 2026-04-05 03:08:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:16.175816 | orchestrator | 2026-04-05 03:08:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:16.176207 | orchestrator | 2026-04-05 03:08:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:16.176223 | orchestrator | 2026-04-05 03:08:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:19.225579 | orchestrator | 2026-04-05 03:08:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:19.225960 | orchestrator | 2026-04-05 03:08:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:19.225993 | orchestrator | 2026-04-05 03:08:19 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:08:22.270962 | orchestrator | 2026-04-05 03:08:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:22.273825 | orchestrator | 2026-04-05 03:08:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:22.273927 | orchestrator | 2026-04-05 03:08:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:25.320223 | orchestrator | 2026-04-05 03:08:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:25.321622 | orchestrator | 2026-04-05 03:08:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:25.321673 | orchestrator | 2026-04-05 03:08:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:28.373351 | orchestrator | 2026-04-05 03:08:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:28.376198 | orchestrator | 2026-04-05 03:08:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:28.376267 | orchestrator | 2026-04-05 03:08:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:31.424136 | orchestrator | 2026-04-05 03:08:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:31.425980 | orchestrator | 2026-04-05 03:08:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:31.426089 | orchestrator | 2026-04-05 03:08:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:34.474919 | orchestrator | 2026-04-05 03:08:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:34.475601 | orchestrator | 2026-04-05 03:08:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:34.476010 | orchestrator | 2026-04-05 03:08:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:37.525154 | orchestrator | 2026-04-05 
03:08:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:37.526259 | orchestrator | 2026-04-05 03:08:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:37.526344 | orchestrator | 2026-04-05 03:08:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:40.575543 | orchestrator | 2026-04-05 03:08:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:40.577571 | orchestrator | 2026-04-05 03:08:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:40.577629 | orchestrator | 2026-04-05 03:08:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:43.625762 | orchestrator | 2026-04-05 03:08:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:43.628375 | orchestrator | 2026-04-05 03:08:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:43.628437 | orchestrator | 2026-04-05 03:08:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:46.676969 | orchestrator | 2026-04-05 03:08:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:46.677565 | orchestrator | 2026-04-05 03:08:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:46.677598 | orchestrator | 2026-04-05 03:08:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:49.732159 | orchestrator | 2026-04-05 03:08:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:49.734766 | orchestrator | 2026-04-05 03:08:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:49.734825 | orchestrator | 2026-04-05 03:08:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:52.792178 | orchestrator | 2026-04-05 03:08:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:08:52.794523 | orchestrator | 2026-04-05 03:08:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:52.794608 | orchestrator | 2026-04-05 03:08:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:55.851130 | orchestrator | 2026-04-05 03:08:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:55.851416 | orchestrator | 2026-04-05 03:08:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:55.851455 | orchestrator | 2026-04-05 03:08:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:08:58.902365 | orchestrator | 2026-04-05 03:08:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:08:58.904128 | orchestrator | 2026-04-05 03:08:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:08:58.904181 | orchestrator | 2026-04-05 03:08:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:09:01.957584 | orchestrator | 2026-04-05 03:09:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:09:01.958972 | orchestrator | 2026-04-05 03:09:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:09:01.959024 | orchestrator | 2026-04-05 03:09:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:09:05.004520 | orchestrator | 2026-04-05 03:09:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:09:05.007338 | orchestrator | 2026-04-05 03:09:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:09:05.007440 | orchestrator | 2026-04-05 03:09:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:09:08.066587 | orchestrator | 2026-04-05 03:09:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:09:08.069356 | orchestrator | 2026-04-05 03:09:08 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:09:08.069426 | orchestrator | 2026-04-05 03:09:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:09:11.119465 | orchestrator | 2026-04-05 03:09:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:09:11.120333 | orchestrator | 2026-04-05 03:09:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:09:11.120368 | orchestrator | 2026-04-05 03:09:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:09:14.167362 | orchestrator | 2026-04-05 03:09:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:09:14.168674 | orchestrator | 2026-04-05 03:09:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:09:14.168720 | orchestrator | 2026-04-05 03:09:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:09:17.214151 | orchestrator | 2026-04-05 03:09:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:09:17.215345 | orchestrator | 2026-04-05 03:09:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:09:17.215399 | orchestrator | 2026-04-05 03:09:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:09:20.261575 | orchestrator | 2026-04-05 03:09:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:09:20.265527 | orchestrator | 2026-04-05 03:09:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:09:20.265618 | orchestrator | 2026-04-05 03:09:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:09:23.318505 | orchestrator | 2026-04-05 03:09:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:09:23.319749 | orchestrator | 2026-04-05 03:09:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:09:23.319779 | orchestrator | 2026-04-05 03:09:23 | INFO  | Wait 1 second(s) until the next check
2026-04-05 03:09:26.372829 | orchestrator | 2026-04-05 03:09:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 03:09:26.374576 | orchestrator | 2026-04-05 03:09:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 03:09:26.374629 | orchestrator | 2026-04-05 03:09:26 | INFO  | Wait 1 second(s) until the next check
[... identical poll output repeated every ~3 seconds from 03:09:29 to 03:14:52; tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 remain in state STARTED throughout ...]
2026-04-05 03:14:55.857864 | orchestrator | 2026-04-05 03:14:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 03:14:55.859888 | orchestrator | 2026-04-05 03:14:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 03:14:55.859963 | orchestrator | 2026-04-05 03:14:55 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:14:58.898091 | orchestrator | 2026-04-05 03:14:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:14:58.898703 | orchestrator | 2026-04-05 03:14:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:14:58.899179 | orchestrator | 2026-04-05 03:14:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:01.950594 | orchestrator | 2026-04-05 03:15:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:01.952491 | orchestrator | 2026-04-05 03:15:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:01.952553 | orchestrator | 2026-04-05 03:15:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:05.003032 | orchestrator | 2026-04-05 03:15:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:05.005619 | orchestrator | 2026-04-05 03:15:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:05.005688 | orchestrator | 2026-04-05 03:15:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:08.051948 | orchestrator | 2026-04-05 03:15:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:08.053421 | orchestrator | 2026-04-05 03:15:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:08.053465 | orchestrator | 2026-04-05 03:15:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:11.111741 | orchestrator | 2026-04-05 03:15:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:11.114476 | orchestrator | 2026-04-05 03:15:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:11.114660 | orchestrator | 2026-04-05 03:15:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:14.162462 | orchestrator | 2026-04-05 
03:15:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:14.165705 | orchestrator | 2026-04-05 03:15:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:14.165770 | orchestrator | 2026-04-05 03:15:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:17.223677 | orchestrator | 2026-04-05 03:15:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:17.225408 | orchestrator | 2026-04-05 03:15:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:17.225537 | orchestrator | 2026-04-05 03:15:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:20.278909 | orchestrator | 2026-04-05 03:15:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:20.280232 | orchestrator | 2026-04-05 03:15:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:20.280287 | orchestrator | 2026-04-05 03:15:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:23.328507 | orchestrator | 2026-04-05 03:15:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:23.329636 | orchestrator | 2026-04-05 03:15:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:23.330498 | orchestrator | 2026-04-05 03:15:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:26.384811 | orchestrator | 2026-04-05 03:15:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:26.386976 | orchestrator | 2026-04-05 03:15:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:26.387050 | orchestrator | 2026-04-05 03:15:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:29.437019 | orchestrator | 2026-04-05 03:15:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:15:29.438141 | orchestrator | 2026-04-05 03:15:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:29.438230 | orchestrator | 2026-04-05 03:15:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:32.489544 | orchestrator | 2026-04-05 03:15:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:32.491627 | orchestrator | 2026-04-05 03:15:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:32.491725 | orchestrator | 2026-04-05 03:15:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:35.542674 | orchestrator | 2026-04-05 03:15:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:35.544020 | orchestrator | 2026-04-05 03:15:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:35.544077 | orchestrator | 2026-04-05 03:15:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:38.597514 | orchestrator | 2026-04-05 03:15:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:38.598446 | orchestrator | 2026-04-05 03:15:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:38.598521 | orchestrator | 2026-04-05 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:41.645382 | orchestrator | 2026-04-05 03:15:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:41.646205 | orchestrator | 2026-04-05 03:15:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:41.646262 | orchestrator | 2026-04-05 03:15:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:44.690940 | orchestrator | 2026-04-05 03:15:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:44.693543 | orchestrator | 2026-04-05 03:15:44 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:44.693622 | orchestrator | 2026-04-05 03:15:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:47.742115 | orchestrator | 2026-04-05 03:15:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:47.743416 | orchestrator | 2026-04-05 03:15:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:47.743495 | orchestrator | 2026-04-05 03:15:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:50.799401 | orchestrator | 2026-04-05 03:15:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:50.801259 | orchestrator | 2026-04-05 03:15:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:50.801289 | orchestrator | 2026-04-05 03:15:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:53.845863 | orchestrator | 2026-04-05 03:15:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:53.847249 | orchestrator | 2026-04-05 03:15:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:53.847356 | orchestrator | 2026-04-05 03:15:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:56.897538 | orchestrator | 2026-04-05 03:15:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:56.900449 | orchestrator | 2026-04-05 03:15:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:15:56.900506 | orchestrator | 2026-04-05 03:15:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:15:59.952603 | orchestrator | 2026-04-05 03:15:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:15:59.954732 | orchestrator | 2026-04-05 03:15:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:15:59.954771 | orchestrator | 2026-04-05 03:15:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:03.005473 | orchestrator | 2026-04-05 03:16:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:03.010284 | orchestrator | 2026-04-05 03:16:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:03.010940 | orchestrator | 2026-04-05 03:16:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:06.056328 | orchestrator | 2026-04-05 03:16:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:06.058115 | orchestrator | 2026-04-05 03:16:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:06.058266 | orchestrator | 2026-04-05 03:16:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:09.105340 | orchestrator | 2026-04-05 03:16:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:09.107095 | orchestrator | 2026-04-05 03:16:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:09.107145 | orchestrator | 2026-04-05 03:16:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:12.155075 | orchestrator | 2026-04-05 03:16:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:12.157218 | orchestrator | 2026-04-05 03:16:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:12.157302 | orchestrator | 2026-04-05 03:16:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:15.200372 | orchestrator | 2026-04-05 03:16:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:15.201099 | orchestrator | 2026-04-05 03:16:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:15.201154 | orchestrator | 2026-04-05 03:16:15 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:16:18.246094 | orchestrator | 2026-04-05 03:16:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:18.246867 | orchestrator | 2026-04-05 03:16:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:18.246889 | orchestrator | 2026-04-05 03:16:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:21.292337 | orchestrator | 2026-04-05 03:16:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:21.298352 | orchestrator | 2026-04-05 03:16:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:21.298436 | orchestrator | 2026-04-05 03:16:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:24.343282 | orchestrator | 2026-04-05 03:16:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:24.343892 | orchestrator | 2026-04-05 03:16:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:24.343925 | orchestrator | 2026-04-05 03:16:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:27.386736 | orchestrator | 2026-04-05 03:16:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:27.387780 | orchestrator | 2026-04-05 03:16:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:27.387831 | orchestrator | 2026-04-05 03:16:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:30.432385 | orchestrator | 2026-04-05 03:16:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:30.434298 | orchestrator | 2026-04-05 03:16:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:30.434344 | orchestrator | 2026-04-05 03:16:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:33.481292 | orchestrator | 2026-04-05 
03:16:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:33.482955 | orchestrator | 2026-04-05 03:16:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:33.483021 | orchestrator | 2026-04-05 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:36.531224 | orchestrator | 2026-04-05 03:16:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:36.533839 | orchestrator | 2026-04-05 03:16:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:36.533907 | orchestrator | 2026-04-05 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:39.576738 | orchestrator | 2026-04-05 03:16:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:39.576820 | orchestrator | 2026-04-05 03:16:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:39.576831 | orchestrator | 2026-04-05 03:16:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:42.622899 | orchestrator | 2026-04-05 03:16:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:42.623367 | orchestrator | 2026-04-05 03:16:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:42.623406 | orchestrator | 2026-04-05 03:16:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:45.675672 | orchestrator | 2026-04-05 03:16:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:45.677738 | orchestrator | 2026-04-05 03:16:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:45.677839 | orchestrator | 2026-04-05 03:16:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:48.722554 | orchestrator | 2026-04-05 03:16:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:16:48.723235 | orchestrator | 2026-04-05 03:16:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:48.723263 | orchestrator | 2026-04-05 03:16:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:51.778137 | orchestrator | 2026-04-05 03:16:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:51.780909 | orchestrator | 2026-04-05 03:16:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:51.780949 | orchestrator | 2026-04-05 03:16:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:54.831442 | orchestrator | 2026-04-05 03:16:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:54.835595 | orchestrator | 2026-04-05 03:16:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:54.835675 | orchestrator | 2026-04-05 03:16:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:16:57.885544 | orchestrator | 2026-04-05 03:16:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:16:57.888034 | orchestrator | 2026-04-05 03:16:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:16:57.888090 | orchestrator | 2026-04-05 03:16:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:00.940485 | orchestrator | 2026-04-05 03:17:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:00.942449 | orchestrator | 2026-04-05 03:17:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:00.942507 | orchestrator | 2026-04-05 03:17:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:03.992354 | orchestrator | 2026-04-05 03:17:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:03.994730 | orchestrator | 2026-04-05 03:17:03 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:03.994794 | orchestrator | 2026-04-05 03:17:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:07.046323 | orchestrator | 2026-04-05 03:17:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:07.049100 | orchestrator | 2026-04-05 03:17:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:07.049235 | orchestrator | 2026-04-05 03:17:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:10.099335 | orchestrator | 2026-04-05 03:17:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:10.100772 | orchestrator | 2026-04-05 03:17:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:10.100854 | orchestrator | 2026-04-05 03:17:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:13.151199 | orchestrator | 2026-04-05 03:17:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:13.152960 | orchestrator | 2026-04-05 03:17:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:13.152999 | orchestrator | 2026-04-05 03:17:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:16.199810 | orchestrator | 2026-04-05 03:17:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:16.202243 | orchestrator | 2026-04-05 03:17:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:16.202320 | orchestrator | 2026-04-05 03:17:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:19.255207 | orchestrator | 2026-04-05 03:17:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:19.256914 | orchestrator | 2026-04-05 03:17:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:17:19.256967 | orchestrator | 2026-04-05 03:17:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:22.313220 | orchestrator | 2026-04-05 03:17:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:22.314112 | orchestrator | 2026-04-05 03:17:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:22.314195 | orchestrator | 2026-04-05 03:17:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:25.358079 | orchestrator | 2026-04-05 03:17:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:25.359424 | orchestrator | 2026-04-05 03:17:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:25.359473 | orchestrator | 2026-04-05 03:17:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:28.416065 | orchestrator | 2026-04-05 03:17:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:28.419610 | orchestrator | 2026-04-05 03:17:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:28.419683 | orchestrator | 2026-04-05 03:17:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:31.461743 | orchestrator | 2026-04-05 03:17:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:31.461923 | orchestrator | 2026-04-05 03:17:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:31.461944 | orchestrator | 2026-04-05 03:17:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:34.506592 | orchestrator | 2026-04-05 03:17:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:34.508956 | orchestrator | 2026-04-05 03:17:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:34.509017 | orchestrator | 2026-04-05 03:17:34 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:17:37.553348 | orchestrator | 2026-04-05 03:17:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:37.553582 | orchestrator | 2026-04-05 03:17:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:37.553607 | orchestrator | 2026-04-05 03:17:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:40.599370 | orchestrator | 2026-04-05 03:17:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:40.600709 | orchestrator | 2026-04-05 03:17:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:40.600976 | orchestrator | 2026-04-05 03:17:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:43.648258 | orchestrator | 2026-04-05 03:17:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:43.649659 | orchestrator | 2026-04-05 03:17:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:43.649747 | orchestrator | 2026-04-05 03:17:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:46.708665 | orchestrator | 2026-04-05 03:17:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:46.710448 | orchestrator | 2026-04-05 03:17:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:46.710521 | orchestrator | 2026-04-05 03:17:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:49.759324 | orchestrator | 2026-04-05 03:17:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:49.761179 | orchestrator | 2026-04-05 03:17:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:49.761355 | orchestrator | 2026-04-05 03:17:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:52.811620 | orchestrator | 2026-04-05 
03:17:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:52.812966 | orchestrator | 2026-04-05 03:17:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:52.813029 | orchestrator | 2026-04-05 03:17:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:55.862110 | orchestrator | 2026-04-05 03:17:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:55.863898 | orchestrator | 2026-04-05 03:17:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:55.863968 | orchestrator | 2026-04-05 03:17:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:17:58.907267 | orchestrator | 2026-04-05 03:17:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:17:58.907888 | orchestrator | 2026-04-05 03:17:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:17:58.907926 | orchestrator | 2026-04-05 03:17:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:01.960245 | orchestrator | 2026-04-05 03:18:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:01.961602 | orchestrator | 2026-04-05 03:18:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:01.961666 | orchestrator | 2026-04-05 03:18:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:05.010419 | orchestrator | 2026-04-05 03:18:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:05.011860 | orchestrator | 2026-04-05 03:18:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:05.011900 | orchestrator | 2026-04-05 03:18:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:08.058407 | orchestrator | 2026-04-05 03:18:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:18:08.059921 | orchestrator | 2026-04-05 03:18:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:08.059948 | orchestrator | 2026-04-05 03:18:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:11.107806 | orchestrator | 2026-04-05 03:18:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:11.109344 | orchestrator | 2026-04-05 03:18:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:11.109389 | orchestrator | 2026-04-05 03:18:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:14.153367 | orchestrator | 2026-04-05 03:18:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:14.155956 | orchestrator | 2026-04-05 03:18:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:14.156241 | orchestrator | 2026-04-05 03:18:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:17.207084 | orchestrator | 2026-04-05 03:18:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:17.208926 | orchestrator | 2026-04-05 03:18:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:17.209005 | orchestrator | 2026-04-05 03:18:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:20.247298 | orchestrator | 2026-04-05 03:18:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:20.248114 | orchestrator | 2026-04-05 03:18:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:20.248164 | orchestrator | 2026-04-05 03:18:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:23.294601 | orchestrator | 2026-04-05 03:18:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:23.296083 | orchestrator | 2026-04-05 03:18:23 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:23.296125 | orchestrator | 2026-04-05 03:18:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:26.343295 | orchestrator | 2026-04-05 03:18:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:26.345739 | orchestrator | 2026-04-05 03:18:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:26.345778 | orchestrator | 2026-04-05 03:18:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:29.396321 | orchestrator | 2026-04-05 03:18:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:29.399040 | orchestrator | 2026-04-05 03:18:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:29.399106 | orchestrator | 2026-04-05 03:18:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:32.439848 | orchestrator | 2026-04-05 03:18:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:32.440752 | orchestrator | 2026-04-05 03:18:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:32.440803 | orchestrator | 2026-04-05 03:18:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:35.485992 | orchestrator | 2026-04-05 03:18:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:35.488219 | orchestrator | 2026-04-05 03:18:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:18:35.488258 | orchestrator | 2026-04-05 03:18:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:18:38.544266 | orchestrator | 2026-04-05 03:18:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:18:38.545872 | orchestrator | 2026-04-05 03:18:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:18:38.546199 | orchestrator | 2026-04-05 03:18:38 | INFO  | Wait 1 second(s) until the next check
2026-04-05 03:18:41.591228 | orchestrator | 2026-04-05 03:18:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 03:18:41.591413 | orchestrator | 2026-04-05 03:18:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 03:18:41.592926 | orchestrator | 2026-04-05 03:18:41 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:18:44 through 03:23:37; tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 remained in state STARTED throughout ...]
2026-04-05 03:23:40.631545 | orchestrator | 2026-04-05 03:23:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 03:23:40.633209 | orchestrator | 2026-04-05 03:23:40 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:23:40.633252 | orchestrator | 2026-04-05 03:23:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:23:43.682234 | orchestrator | 2026-04-05 03:23:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:23:43.682471 | orchestrator | 2026-04-05 03:23:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:23:43.683162 | orchestrator | 2026-04-05 03:23:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:23:46.731183 | orchestrator | 2026-04-05 03:23:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:23:46.733194 | orchestrator | 2026-04-05 03:23:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:23:46.733288 | orchestrator | 2026-04-05 03:23:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:23:49.780535 | orchestrator | 2026-04-05 03:23:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:23:49.782872 | orchestrator | 2026-04-05 03:23:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:23:49.782931 | orchestrator | 2026-04-05 03:23:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:23:52.831563 | orchestrator | 2026-04-05 03:23:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:23:52.832693 | orchestrator | 2026-04-05 03:23:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:23:52.832798 | orchestrator | 2026-04-05 03:23:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:23:55.878125 | orchestrator | 2026-04-05 03:23:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:23:55.880379 | orchestrator | 2026-04-05 03:23:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:23:55.880447 | orchestrator | 2026-04-05 03:23:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:23:58.930817 | orchestrator | 2026-04-05 03:23:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:23:58.931496 | orchestrator | 2026-04-05 03:23:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:23:58.931536 | orchestrator | 2026-04-05 03:23:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:01.984522 | orchestrator | 2026-04-05 03:24:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:01.986853 | orchestrator | 2026-04-05 03:24:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:01.987000 | orchestrator | 2026-04-05 03:24:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:05.054251 | orchestrator | 2026-04-05 03:24:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:05.056934 | orchestrator | 2026-04-05 03:24:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:05.056984 | orchestrator | 2026-04-05 03:24:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:08.105475 | orchestrator | 2026-04-05 03:24:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:08.108466 | orchestrator | 2026-04-05 03:24:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:08.108647 | orchestrator | 2026-04-05 03:24:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:11.160450 | orchestrator | 2026-04-05 03:24:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:11.162763 | orchestrator | 2026-04-05 03:24:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:11.162828 | orchestrator | 2026-04-05 03:24:11 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:24:14.209599 | orchestrator | 2026-04-05 03:24:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:14.211325 | orchestrator | 2026-04-05 03:24:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:14.211358 | orchestrator | 2026-04-05 03:24:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:17.268399 | orchestrator | 2026-04-05 03:24:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:17.270326 | orchestrator | 2026-04-05 03:24:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:17.270397 | orchestrator | 2026-04-05 03:24:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:20.317811 | orchestrator | 2026-04-05 03:24:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:20.319790 | orchestrator | 2026-04-05 03:24:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:20.319848 | orchestrator | 2026-04-05 03:24:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:23.368823 | orchestrator | 2026-04-05 03:24:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:23.370898 | orchestrator | 2026-04-05 03:24:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:23.370933 | orchestrator | 2026-04-05 03:24:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:26.425294 | orchestrator | 2026-04-05 03:24:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:26.427395 | orchestrator | 2026-04-05 03:24:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:26.427575 | orchestrator | 2026-04-05 03:24:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:29.475901 | orchestrator | 2026-04-05 
03:24:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:29.476151 | orchestrator | 2026-04-05 03:24:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:29.476187 | orchestrator | 2026-04-05 03:24:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:32.514118 | orchestrator | 2026-04-05 03:24:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:32.515320 | orchestrator | 2026-04-05 03:24:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:32.515361 | orchestrator | 2026-04-05 03:24:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:35.568922 | orchestrator | 2026-04-05 03:24:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:35.570790 | orchestrator | 2026-04-05 03:24:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:35.570837 | orchestrator | 2026-04-05 03:24:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:38.624303 | orchestrator | 2026-04-05 03:24:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:38.627233 | orchestrator | 2026-04-05 03:24:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:38.627415 | orchestrator | 2026-04-05 03:24:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:41.668914 | orchestrator | 2026-04-05 03:24:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:41.672576 | orchestrator | 2026-04-05 03:24:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:41.672662 | orchestrator | 2026-04-05 03:24:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:44.715431 | orchestrator | 2026-04-05 03:24:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:24:44.717593 | orchestrator | 2026-04-05 03:24:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:44.717667 | orchestrator | 2026-04-05 03:24:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:47.769611 | orchestrator | 2026-04-05 03:24:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:47.771307 | orchestrator | 2026-04-05 03:24:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:47.771355 | orchestrator | 2026-04-05 03:24:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:50.821896 | orchestrator | 2026-04-05 03:24:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:50.823348 | orchestrator | 2026-04-05 03:24:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:50.823407 | orchestrator | 2026-04-05 03:24:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:53.866375 | orchestrator | 2026-04-05 03:24:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:53.868851 | orchestrator | 2026-04-05 03:24:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:53.868929 | orchestrator | 2026-04-05 03:24:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:56.919027 | orchestrator | 2026-04-05 03:24:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:56.920383 | orchestrator | 2026-04-05 03:24:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:56.920422 | orchestrator | 2026-04-05 03:24:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:24:59.968086 | orchestrator | 2026-04-05 03:24:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:24:59.968439 | orchestrator | 2026-04-05 03:24:59 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:24:59.968494 | orchestrator | 2026-04-05 03:24:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:03.017110 | orchestrator | 2026-04-05 03:25:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:03.017326 | orchestrator | 2026-04-05 03:25:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:03.017969 | orchestrator | 2026-04-05 03:25:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:06.060729 | orchestrator | 2026-04-05 03:25:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:06.061293 | orchestrator | 2026-04-05 03:25:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:06.061729 | orchestrator | 2026-04-05 03:25:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:09.112901 | orchestrator | 2026-04-05 03:25:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:09.116523 | orchestrator | 2026-04-05 03:25:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:09.116620 | orchestrator | 2026-04-05 03:25:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:12.172152 | orchestrator | 2026-04-05 03:25:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:12.174212 | orchestrator | 2026-04-05 03:25:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:12.174249 | orchestrator | 2026-04-05 03:25:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:15.224406 | orchestrator | 2026-04-05 03:25:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:15.225381 | orchestrator | 2026-04-05 03:25:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:25:15.225427 | orchestrator | 2026-04-05 03:25:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:18.280694 | orchestrator | 2026-04-05 03:25:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:18.281882 | orchestrator | 2026-04-05 03:25:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:18.281916 | orchestrator | 2026-04-05 03:25:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:21.332091 | orchestrator | 2026-04-05 03:25:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:21.336060 | orchestrator | 2026-04-05 03:25:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:21.336121 | orchestrator | 2026-04-05 03:25:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:24.382508 | orchestrator | 2026-04-05 03:25:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:24.386732 | orchestrator | 2026-04-05 03:25:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:24.386818 | orchestrator | 2026-04-05 03:25:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:27.439553 | orchestrator | 2026-04-05 03:25:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:27.443508 | orchestrator | 2026-04-05 03:25:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:27.443681 | orchestrator | 2026-04-05 03:25:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:30.489840 | orchestrator | 2026-04-05 03:25:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:30.490610 | orchestrator | 2026-04-05 03:25:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:30.490640 | orchestrator | 2026-04-05 03:25:30 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:25:33.542677 | orchestrator | 2026-04-05 03:25:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:33.544903 | orchestrator | 2026-04-05 03:25:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:33.544997 | orchestrator | 2026-04-05 03:25:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:36.590880 | orchestrator | 2026-04-05 03:25:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:36.592799 | orchestrator | 2026-04-05 03:25:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:36.593174 | orchestrator | 2026-04-05 03:25:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:39.646874 | orchestrator | 2026-04-05 03:25:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:39.647716 | orchestrator | 2026-04-05 03:25:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:39.647760 | orchestrator | 2026-04-05 03:25:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:42.714289 | orchestrator | 2026-04-05 03:25:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:42.716719 | orchestrator | 2026-04-05 03:25:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:42.716768 | orchestrator | 2026-04-05 03:25:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:45.769217 | orchestrator | 2026-04-05 03:25:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:45.772247 | orchestrator | 2026-04-05 03:25:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:45.772479 | orchestrator | 2026-04-05 03:25:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:48.834278 | orchestrator | 2026-04-05 
03:25:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:48.836736 | orchestrator | 2026-04-05 03:25:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:48.836819 | orchestrator | 2026-04-05 03:25:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:51.885515 | orchestrator | 2026-04-05 03:25:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:51.888674 | orchestrator | 2026-04-05 03:25:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:51.888746 | orchestrator | 2026-04-05 03:25:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:54.934854 | orchestrator | 2026-04-05 03:25:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:54.936834 | orchestrator | 2026-04-05 03:25:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:54.936930 | orchestrator | 2026-04-05 03:25:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:25:57.981424 | orchestrator | 2026-04-05 03:25:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:25:57.982552 | orchestrator | 2026-04-05 03:25:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:25:57.982588 | orchestrator | 2026-04-05 03:25:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:26:01.037164 | orchestrator | 2026-04-05 03:26:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:26:01.039160 | orchestrator | 2026-04-05 03:26:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:26:01.039280 | orchestrator | 2026-04-05 03:26:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:26:04.096390 | orchestrator | 2026-04-05 03:26:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:26:04.098116 | orchestrator | 2026-04-05 03:26:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:26:04.098159 | orchestrator | 2026-04-05 03:26:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:26:07.137022 | orchestrator | 2026-04-05 03:26:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:26:07.137816 | orchestrator | 2026-04-05 03:26:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:26:07.137845 | orchestrator | 2026-04-05 03:26:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:26:10.191107 | orchestrator | 2026-04-05 03:26:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:26:10.191283 | orchestrator | 2026-04-05 03:26:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:26:10.191578 | orchestrator | 2026-04-05 03:26:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:26:13.240884 | orchestrator | 2026-04-05 03:26:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:13.345263 | orchestrator | 2026-04-05 03:28:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:13.345376 | orchestrator | 2026-04-05 03:28:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:16.388084 | orchestrator | 2026-04-05 03:28:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:16.391404 | orchestrator | 2026-04-05 03:28:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:16.391472 | orchestrator | 2026-04-05 03:28:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:19.437904 | orchestrator | 2026-04-05 03:28:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:19.440386 | orchestrator | 2026-04-05 03:28:19 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:19.440423 | orchestrator | 2026-04-05 03:28:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:22.489525 | orchestrator | 2026-04-05 03:28:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:22.490329 | orchestrator | 2026-04-05 03:28:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:22.490367 | orchestrator | 2026-04-05 03:28:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:25.530764 | orchestrator | 2026-04-05 03:28:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:25.533348 | orchestrator | 2026-04-05 03:28:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:25.533413 | orchestrator | 2026-04-05 03:28:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:28.580469 | orchestrator | 2026-04-05 03:28:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:28.584016 | orchestrator | 2026-04-05 03:28:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:28.584096 | orchestrator | 2026-04-05 03:28:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:31.627685 | orchestrator | 2026-04-05 03:28:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:31.630807 | orchestrator | 2026-04-05 03:28:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:31.630893 | orchestrator | 2026-04-05 03:28:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:34.679004 | orchestrator | 2026-04-05 03:28:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:34.681465 | orchestrator | 2026-04-05 03:28:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:28:34.681720 | orchestrator | 2026-04-05 03:28:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:37.732705 | orchestrator | 2026-04-05 03:28:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:37.733931 | orchestrator | 2026-04-05 03:28:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:37.734216 | orchestrator | 2026-04-05 03:28:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:40.778356 | orchestrator | 2026-04-05 03:28:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:40.779993 | orchestrator | 2026-04-05 03:28:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:40.780061 | orchestrator | 2026-04-05 03:28:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:43.827347 | orchestrator | 2026-04-05 03:28:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:43.832179 | orchestrator | 2026-04-05 03:28:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:43.832293 | orchestrator | 2026-04-05 03:28:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:46.879101 | orchestrator | 2026-04-05 03:28:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:46.881997 | orchestrator | 2026-04-05 03:28:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:46.882215 | orchestrator | 2026-04-05 03:28:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:49.928893 | orchestrator | 2026-04-05 03:28:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:49.931321 | orchestrator | 2026-04-05 03:28:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:49.931386 | orchestrator | 2026-04-05 03:28:49 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:28:52.980657 | orchestrator | 2026-04-05 03:28:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:52.982262 | orchestrator | 2026-04-05 03:28:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:52.982324 | orchestrator | 2026-04-05 03:28:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:56.027454 | orchestrator | 2026-04-05 03:28:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:56.028515 | orchestrator | 2026-04-05 03:28:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:56.028583 | orchestrator | 2026-04-05 03:28:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:28:59.077639 | orchestrator | 2026-04-05 03:28:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:28:59.081790 | orchestrator | 2026-04-05 03:28:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:28:59.081863 | orchestrator | 2026-04-05 03:28:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:29:02.127583 | orchestrator | 2026-04-05 03:29:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:29:02.129419 | orchestrator | 2026-04-05 03:29:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:29:02.129473 | orchestrator | 2026-04-05 03:29:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:29:05.169533 | orchestrator | 2026-04-05 03:29:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:29:05.170812 | orchestrator | 2026-04-05 03:29:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:29:05.170865 | orchestrator | 2026-04-05 03:29:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:29:08.208586 | orchestrator | 2026-04-05 
03:29:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:29:08.210046 | orchestrator | 2026-04-05 03:29:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:29:08.210079 | orchestrator | 2026-04-05 03:29:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:29:11.255803 | orchestrator | 2026-04-05 03:29:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:29:11.257654 | orchestrator | 2026-04-05 03:29:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:29:11.257716 | orchestrator | 2026-04-05 03:29:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:29:14.304665 | orchestrator | 2026-04-05 03:29:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:29:14.307674 | orchestrator | 2026-04-05 03:29:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:29:14.307722 | orchestrator | 2026-04-05 03:29:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:29:17.351859 | orchestrator | 2026-04-05 03:29:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:29:17.352560 | orchestrator | 2026-04-05 03:29:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:29:17.352605 | orchestrator | 2026-04-05 03:29:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:29:20.399449 | orchestrator | 2026-04-05 03:29:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:29:20.401369 | orchestrator | 2026-04-05 03:29:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:29:20.401408 | orchestrator | 2026-04-05 03:29:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:29:23.442561 | orchestrator | 2026-04-05 03:29:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:29:23.446227 | orchestrator | 2026-04-05 03:29:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 03:29:23.446290 | orchestrator | 2026-04-05 03:29:23 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: from 03:29:26 to 03:34:52 the same three records repeat every ~3 seconds — "Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED", "Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED", "Wait 1 second(s) until the next check" ...]
2026-04-05 03:34:55.548992 | orchestrator | 2026-04-05 03:34:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 03:34:55.550901 | orchestrator | 2026-04-05 03:34:55 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:34:55.550963 | orchestrator | 2026-04-05 03:34:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:34:58.601798 | orchestrator | 2026-04-05 03:34:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:34:58.603354 | orchestrator | 2026-04-05 03:34:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:34:58.603409 | orchestrator | 2026-04-05 03:34:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:01.644490 | orchestrator | 2026-04-05 03:35:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:01.645335 | orchestrator | 2026-04-05 03:35:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:01.645371 | orchestrator | 2026-04-05 03:35:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:04.688808 | orchestrator | 2026-04-05 03:35:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:04.690384 | orchestrator | 2026-04-05 03:35:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:04.690493 | orchestrator | 2026-04-05 03:35:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:07.730481 | orchestrator | 2026-04-05 03:35:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:07.732390 | orchestrator | 2026-04-05 03:35:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:07.732479 | orchestrator | 2026-04-05 03:35:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:10.781501 | orchestrator | 2026-04-05 03:35:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:10.783321 | orchestrator | 2026-04-05 03:35:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:35:10.783389 | orchestrator | 2026-04-05 03:35:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:13.838518 | orchestrator | 2026-04-05 03:35:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:13.841678 | orchestrator | 2026-04-05 03:35:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:13.841901 | orchestrator | 2026-04-05 03:35:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:16.891769 | orchestrator | 2026-04-05 03:35:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:16.894583 | orchestrator | 2026-04-05 03:35:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:16.894660 | orchestrator | 2026-04-05 03:35:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:19.937998 | orchestrator | 2026-04-05 03:35:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:19.940058 | orchestrator | 2026-04-05 03:35:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:19.940112 | orchestrator | 2026-04-05 03:35:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:22.982765 | orchestrator | 2026-04-05 03:35:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:22.984409 | orchestrator | 2026-04-05 03:35:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:22.984467 | orchestrator | 2026-04-05 03:35:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:26.028221 | orchestrator | 2026-04-05 03:35:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:26.030311 | orchestrator | 2026-04-05 03:35:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:26.030362 | orchestrator | 2026-04-05 03:35:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:35:29.079479 | orchestrator | 2026-04-05 03:35:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:29.082509 | orchestrator | 2026-04-05 03:35:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:29.082575 | orchestrator | 2026-04-05 03:35:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:32.127034 | orchestrator | 2026-04-05 03:35:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:32.129405 | orchestrator | 2026-04-05 03:35:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:32.129517 | orchestrator | 2026-04-05 03:35:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:35.169158 | orchestrator | 2026-04-05 03:35:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:35.171110 | orchestrator | 2026-04-05 03:35:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:35.171210 | orchestrator | 2026-04-05 03:35:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:38.211986 | orchestrator | 2026-04-05 03:35:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:38.213741 | orchestrator | 2026-04-05 03:35:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:38.214146 | orchestrator | 2026-04-05 03:35:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:41.263950 | orchestrator | 2026-04-05 03:35:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:41.265380 | orchestrator | 2026-04-05 03:35:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:41.265437 | orchestrator | 2026-04-05 03:35:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:44.311626 | orchestrator | 2026-04-05 
03:35:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:44.313391 | orchestrator | 2026-04-05 03:35:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:44.313488 | orchestrator | 2026-04-05 03:35:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:47.360140 | orchestrator | 2026-04-05 03:35:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:47.361086 | orchestrator | 2026-04-05 03:35:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:47.361229 | orchestrator | 2026-04-05 03:35:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:50.405720 | orchestrator | 2026-04-05 03:35:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:50.407123 | orchestrator | 2026-04-05 03:35:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:50.407179 | orchestrator | 2026-04-05 03:35:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:53.456094 | orchestrator | 2026-04-05 03:35:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:53.458800 | orchestrator | 2026-04-05 03:35:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:53.458877 | orchestrator | 2026-04-05 03:35:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:56.501382 | orchestrator | 2026-04-05 03:35:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:35:56.501813 | orchestrator | 2026-04-05 03:35:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:56.502491 | orchestrator | 2026-04-05 03:35:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:35:59.548463 | orchestrator | 2026-04-05 03:35:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:35:59.551399 | orchestrator | 2026-04-05 03:35:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:35:59.551572 | orchestrator | 2026-04-05 03:35:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:02.591426 | orchestrator | 2026-04-05 03:36:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:02.591841 | orchestrator | 2026-04-05 03:36:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:02.591956 | orchestrator | 2026-04-05 03:36:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:05.638503 | orchestrator | 2026-04-05 03:36:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:05.639559 | orchestrator | 2026-04-05 03:36:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:05.639588 | orchestrator | 2026-04-05 03:36:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:08.692289 | orchestrator | 2026-04-05 03:36:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:08.693877 | orchestrator | 2026-04-05 03:36:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:08.693918 | orchestrator | 2026-04-05 03:36:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:11.746638 | orchestrator | 2026-04-05 03:36:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:11.748043 | orchestrator | 2026-04-05 03:36:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:11.748076 | orchestrator | 2026-04-05 03:36:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:14.806802 | orchestrator | 2026-04-05 03:36:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:14.807928 | orchestrator | 2026-04-05 03:36:14 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:14.808007 | orchestrator | 2026-04-05 03:36:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:17.862609 | orchestrator | 2026-04-05 03:36:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:17.864843 | orchestrator | 2026-04-05 03:36:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:17.864883 | orchestrator | 2026-04-05 03:36:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:20.911522 | orchestrator | 2026-04-05 03:36:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:20.912260 | orchestrator | 2026-04-05 03:36:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:20.912295 | orchestrator | 2026-04-05 03:36:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:23.955875 | orchestrator | 2026-04-05 03:36:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:23.957075 | orchestrator | 2026-04-05 03:36:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:23.957119 | orchestrator | 2026-04-05 03:36:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:27.014395 | orchestrator | 2026-04-05 03:36:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:27.016065 | orchestrator | 2026-04-05 03:36:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:27.016115 | orchestrator | 2026-04-05 03:36:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:30.072415 | orchestrator | 2026-04-05 03:36:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:30.073714 | orchestrator | 2026-04-05 03:36:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:36:30.073761 | orchestrator | 2026-04-05 03:36:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:33.122280 | orchestrator | 2026-04-05 03:36:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:33.123135 | orchestrator | 2026-04-05 03:36:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:33.123173 | orchestrator | 2026-04-05 03:36:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:36.175461 | orchestrator | 2026-04-05 03:36:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:36.178734 | orchestrator | 2026-04-05 03:36:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:36.178842 | orchestrator | 2026-04-05 03:36:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:39.230184 | orchestrator | 2026-04-05 03:36:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:39.232030 | orchestrator | 2026-04-05 03:36:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:39.232159 | orchestrator | 2026-04-05 03:36:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:42.288240 | orchestrator | 2026-04-05 03:36:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:42.290603 | orchestrator | 2026-04-05 03:36:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:42.290729 | orchestrator | 2026-04-05 03:36:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:45.342262 | orchestrator | 2026-04-05 03:36:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:45.343306 | orchestrator | 2026-04-05 03:36:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:45.343389 | orchestrator | 2026-04-05 03:36:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:36:48.396042 | orchestrator | 2026-04-05 03:36:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:48.398069 | orchestrator | 2026-04-05 03:36:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:48.398131 | orchestrator | 2026-04-05 03:36:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:51.446949 | orchestrator | 2026-04-05 03:36:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:51.451158 | orchestrator | 2026-04-05 03:36:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:51.451260 | orchestrator | 2026-04-05 03:36:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:54.491823 | orchestrator | 2026-04-05 03:36:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:54.493407 | orchestrator | 2026-04-05 03:36:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:54.493508 | orchestrator | 2026-04-05 03:36:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:36:57.534794 | orchestrator | 2026-04-05 03:36:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:36:57.537214 | orchestrator | 2026-04-05 03:36:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:36:57.537279 | orchestrator | 2026-04-05 03:36:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:00.592292 | orchestrator | 2026-04-05 03:37:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:00.593641 | orchestrator | 2026-04-05 03:37:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:00.593689 | orchestrator | 2026-04-05 03:37:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:03.647184 | orchestrator | 2026-04-05 
03:37:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:03.649428 | orchestrator | 2026-04-05 03:37:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:03.649498 | orchestrator | 2026-04-05 03:37:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:06.704307 | orchestrator | 2026-04-05 03:37:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:06.706408 | orchestrator | 2026-04-05 03:37:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:06.706564 | orchestrator | 2026-04-05 03:37:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:09.768794 | orchestrator | 2026-04-05 03:37:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:09.770526 | orchestrator | 2026-04-05 03:37:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:09.770581 | orchestrator | 2026-04-05 03:37:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:12.816731 | orchestrator | 2026-04-05 03:37:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:12.818267 | orchestrator | 2026-04-05 03:37:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:12.818321 | orchestrator | 2026-04-05 03:37:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:15.875455 | orchestrator | 2026-04-05 03:37:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:15.876979 | orchestrator | 2026-04-05 03:37:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:15.877025 | orchestrator | 2026-04-05 03:37:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:18.925706 | orchestrator | 2026-04-05 03:37:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:37:18.928832 | orchestrator | 2026-04-05 03:37:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:18.928911 | orchestrator | 2026-04-05 03:37:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:21.977287 | orchestrator | 2026-04-05 03:37:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:21.979465 | orchestrator | 2026-04-05 03:37:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:21.979545 | orchestrator | 2026-04-05 03:37:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:25.031498 | orchestrator | 2026-04-05 03:37:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:25.033861 | orchestrator | 2026-04-05 03:37:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:25.033931 | orchestrator | 2026-04-05 03:37:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:28.083155 | orchestrator | 2026-04-05 03:37:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:28.084852 | orchestrator | 2026-04-05 03:37:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:28.084898 | orchestrator | 2026-04-05 03:37:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:31.132115 | orchestrator | 2026-04-05 03:37:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:31.134497 | orchestrator | 2026-04-05 03:37:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:31.134641 | orchestrator | 2026-04-05 03:37:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:34.178844 | orchestrator | 2026-04-05 03:37:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:34.180410 | orchestrator | 2026-04-05 03:37:34 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:34.180450 | orchestrator | 2026-04-05 03:37:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:37.228059 | orchestrator | 2026-04-05 03:37:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:37.229276 | orchestrator | 2026-04-05 03:37:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:37.229331 | orchestrator | 2026-04-05 03:37:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:40.283880 | orchestrator | 2026-04-05 03:37:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:40.287431 | orchestrator | 2026-04-05 03:37:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:40.287514 | orchestrator | 2026-04-05 03:37:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:43.343097 | orchestrator | 2026-04-05 03:37:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:43.346433 | orchestrator | 2026-04-05 03:37:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:43.346535 | orchestrator | 2026-04-05 03:37:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:46.396507 | orchestrator | 2026-04-05 03:37:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:46.398978 | orchestrator | 2026-04-05 03:37:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:46.399035 | orchestrator | 2026-04-05 03:37:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:49.451239 | orchestrator | 2026-04-05 03:37:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:49.453012 | orchestrator | 2026-04-05 03:37:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:37:49.453093 | orchestrator | 2026-04-05 03:37:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:52.504461 | orchestrator | 2026-04-05 03:37:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:52.506107 | orchestrator | 2026-04-05 03:37:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:52.506165 | orchestrator | 2026-04-05 03:37:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:55.553855 | orchestrator | 2026-04-05 03:37:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:55.555780 | orchestrator | 2026-04-05 03:37:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:55.555990 | orchestrator | 2026-04-05 03:37:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:37:58.609961 | orchestrator | 2026-04-05 03:37:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:37:58.611846 | orchestrator | 2026-04-05 03:37:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:37:58.611898 | orchestrator | 2026-04-05 03:37:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:01.664290 | orchestrator | 2026-04-05 03:38:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:01.665838 | orchestrator | 2026-04-05 03:38:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:01.666303 | orchestrator | 2026-04-05 03:38:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:04.718274 | orchestrator | 2026-04-05 03:38:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:04.720833 | orchestrator | 2026-04-05 03:38:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:04.720940 | orchestrator | 2026-04-05 03:38:04 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:38:07.769295 | orchestrator | 2026-04-05 03:38:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:07.771608 | orchestrator | 2026-04-05 03:38:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:07.771666 | orchestrator | 2026-04-05 03:38:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:10.818721 | orchestrator | 2026-04-05 03:38:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:10.821692 | orchestrator | 2026-04-05 03:38:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:10.821766 | orchestrator | 2026-04-05 03:38:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:13.873711 | orchestrator | 2026-04-05 03:38:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:13.875392 | orchestrator | 2026-04-05 03:38:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:13.875618 | orchestrator | 2026-04-05 03:38:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:16.921284 | orchestrator | 2026-04-05 03:38:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:16.923061 | orchestrator | 2026-04-05 03:38:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:16.923132 | orchestrator | 2026-04-05 03:38:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:19.982942 | orchestrator | 2026-04-05 03:38:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:19.985485 | orchestrator | 2026-04-05 03:38:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:19.985574 | orchestrator | 2026-04-05 03:38:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:23.029597 | orchestrator | 2026-04-05 
03:38:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:23.034249 | orchestrator | 2026-04-05 03:38:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:23.036003 | orchestrator | 2026-04-05 03:38:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:26.076125 | orchestrator | 2026-04-05 03:38:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:26.077457 | orchestrator | 2026-04-05 03:38:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:26.077526 | orchestrator | 2026-04-05 03:38:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:29.129821 | orchestrator | 2026-04-05 03:38:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:29.132891 | orchestrator | 2026-04-05 03:38:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:29.132979 | orchestrator | 2026-04-05 03:38:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:32.176432 | orchestrator | 2026-04-05 03:38:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:32.177888 | orchestrator | 2026-04-05 03:38:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:32.177979 | orchestrator | 2026-04-05 03:38:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:35.226676 | orchestrator | 2026-04-05 03:38:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:38:35.228084 | orchestrator | 2026-04-05 03:38:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:38:35.228266 | orchestrator | 2026-04-05 03:38:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:38:38.280757 | orchestrator | 2026-04-05 03:38:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED
2026-04-05 03:38:38.282257 | orchestrator | 2026-04-05 03:38:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 03:38:38.282314 | orchestrator | 2026-04-05 03:38:38 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 seconds: tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 both remained in state STARTED from 03:38:41 through 03:43:52 ...]
2026-04-05 03:43:52.564869 | orchestrator | 2026-04-05 03:43:52 | INFO  | Wait 1 second(s) until the next check
2026-04-05 03:43:55.612612 | orchestrator | 2026-04-05 03:43:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state
STARTED 2026-04-05 03:43:55.614021 | orchestrator | 2026-04-05 03:43:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:43:55.614253 | orchestrator | 2026-04-05 03:43:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:43:58.659693 | orchestrator | 2026-04-05 03:43:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:43:58.661051 | orchestrator | 2026-04-05 03:43:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:43:58.661104 | orchestrator | 2026-04-05 03:43:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:01.708189 | orchestrator | 2026-04-05 03:44:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:01.709601 | orchestrator | 2026-04-05 03:44:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:01.709665 | orchestrator | 2026-04-05 03:44:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:04.759333 | orchestrator | 2026-04-05 03:44:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:04.760931 | orchestrator | 2026-04-05 03:44:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:04.760982 | orchestrator | 2026-04-05 03:44:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:07.809835 | orchestrator | 2026-04-05 03:44:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:07.812633 | orchestrator | 2026-04-05 03:44:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:07.812679 | orchestrator | 2026-04-05 03:44:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:10.861804 | orchestrator | 2026-04-05 03:44:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:10.862743 | orchestrator | 2026-04-05 03:44:10 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:10.862828 | orchestrator | 2026-04-05 03:44:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:13.913617 | orchestrator | 2026-04-05 03:44:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:13.915793 | orchestrator | 2026-04-05 03:44:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:13.915832 | orchestrator | 2026-04-05 03:44:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:16.958916 | orchestrator | 2026-04-05 03:44:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:16.961238 | orchestrator | 2026-04-05 03:44:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:16.961370 | orchestrator | 2026-04-05 03:44:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:20.011779 | orchestrator | 2026-04-05 03:44:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:20.012605 | orchestrator | 2026-04-05 03:44:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:20.012653 | orchestrator | 2026-04-05 03:44:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:23.063232 | orchestrator | 2026-04-05 03:44:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:23.064578 | orchestrator | 2026-04-05 03:44:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:23.064909 | orchestrator | 2026-04-05 03:44:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:26.110384 | orchestrator | 2026-04-05 03:44:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:26.111409 | orchestrator | 2026-04-05 03:44:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:44:26.111454 | orchestrator | 2026-04-05 03:44:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:29.157141 | orchestrator | 2026-04-05 03:44:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:29.158813 | orchestrator | 2026-04-05 03:44:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:29.158928 | orchestrator | 2026-04-05 03:44:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:32.200115 | orchestrator | 2026-04-05 03:44:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:32.201987 | orchestrator | 2026-04-05 03:44:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:32.202140 | orchestrator | 2026-04-05 03:44:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:35.245867 | orchestrator | 2026-04-05 03:44:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:35.246927 | orchestrator | 2026-04-05 03:44:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:35.246956 | orchestrator | 2026-04-05 03:44:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:38.301424 | orchestrator | 2026-04-05 03:44:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:38.303598 | orchestrator | 2026-04-05 03:44:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:38.303661 | orchestrator | 2026-04-05 03:44:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:41.351657 | orchestrator | 2026-04-05 03:44:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:41.353761 | orchestrator | 2026-04-05 03:44:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:41.353819 | orchestrator | 2026-04-05 03:44:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:44:44.401092 | orchestrator | 2026-04-05 03:44:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:44.403542 | orchestrator | 2026-04-05 03:44:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:44.403613 | orchestrator | 2026-04-05 03:44:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:47.453453 | orchestrator | 2026-04-05 03:44:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:47.455684 | orchestrator | 2026-04-05 03:44:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:47.455776 | orchestrator | 2026-04-05 03:44:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:50.503677 | orchestrator | 2026-04-05 03:44:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:50.505221 | orchestrator | 2026-04-05 03:44:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:50.505320 | orchestrator | 2026-04-05 03:44:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:53.547526 | orchestrator | 2026-04-05 03:44:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:53.549766 | orchestrator | 2026-04-05 03:44:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:53.549823 | orchestrator | 2026-04-05 03:44:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:56.596189 | orchestrator | 2026-04-05 03:44:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:56.597348 | orchestrator | 2026-04-05 03:44:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:56.597379 | orchestrator | 2026-04-05 03:44:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:44:59.649889 | orchestrator | 2026-04-05 
03:44:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:44:59.651576 | orchestrator | 2026-04-05 03:44:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:44:59.651634 | orchestrator | 2026-04-05 03:44:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:02.715127 | orchestrator | 2026-04-05 03:45:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:02.716681 | orchestrator | 2026-04-05 03:45:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:02.716735 | orchestrator | 2026-04-05 03:45:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:05.764732 | orchestrator | 2026-04-05 03:45:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:05.766270 | orchestrator | 2026-04-05 03:45:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:05.766336 | orchestrator | 2026-04-05 03:45:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:08.814118 | orchestrator | 2026-04-05 03:45:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:08.816521 | orchestrator | 2026-04-05 03:45:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:08.816597 | orchestrator | 2026-04-05 03:45:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:11.876279 | orchestrator | 2026-04-05 03:45:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:11.878620 | orchestrator | 2026-04-05 03:45:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:11.878682 | orchestrator | 2026-04-05 03:45:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:14.929033 | orchestrator | 2026-04-05 03:45:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:45:14.932076 | orchestrator | 2026-04-05 03:45:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:14.932129 | orchestrator | 2026-04-05 03:45:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:17.984437 | orchestrator | 2026-04-05 03:45:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:17.984639 | orchestrator | 2026-04-05 03:45:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:17.984658 | orchestrator | 2026-04-05 03:45:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:21.037538 | orchestrator | 2026-04-05 03:45:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:21.038481 | orchestrator | 2026-04-05 03:45:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:21.038521 | orchestrator | 2026-04-05 03:45:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:24.088554 | orchestrator | 2026-04-05 03:45:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:24.090139 | orchestrator | 2026-04-05 03:45:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:24.090188 | orchestrator | 2026-04-05 03:45:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:27.137338 | orchestrator | 2026-04-05 03:45:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:27.139120 | orchestrator | 2026-04-05 03:45:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:27.139178 | orchestrator | 2026-04-05 03:45:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:30.182984 | orchestrator | 2026-04-05 03:45:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:30.184619 | orchestrator | 2026-04-05 03:45:30 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:30.184709 | orchestrator | 2026-04-05 03:45:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:33.224676 | orchestrator | 2026-04-05 03:45:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:33.225890 | orchestrator | 2026-04-05 03:45:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:33.225930 | orchestrator | 2026-04-05 03:45:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:36.281057 | orchestrator | 2026-04-05 03:45:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:36.283372 | orchestrator | 2026-04-05 03:45:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:36.283424 | orchestrator | 2026-04-05 03:45:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:39.325525 | orchestrator | 2026-04-05 03:45:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:39.326261 | orchestrator | 2026-04-05 03:45:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:39.326306 | orchestrator | 2026-04-05 03:45:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:42.372432 | orchestrator | 2026-04-05 03:45:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:42.373560 | orchestrator | 2026-04-05 03:45:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:42.373601 | orchestrator | 2026-04-05 03:45:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:45.415729 | orchestrator | 2026-04-05 03:45:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:45.417178 | orchestrator | 2026-04-05 03:45:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:45:45.417283 | orchestrator | 2026-04-05 03:45:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:48.467516 | orchestrator | 2026-04-05 03:45:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:48.469752 | orchestrator | 2026-04-05 03:45:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:48.469822 | orchestrator | 2026-04-05 03:45:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:51.516902 | orchestrator | 2026-04-05 03:45:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:51.518888 | orchestrator | 2026-04-05 03:45:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:51.518954 | orchestrator | 2026-04-05 03:45:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:54.563751 | orchestrator | 2026-04-05 03:45:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:54.565887 | orchestrator | 2026-04-05 03:45:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:54.565952 | orchestrator | 2026-04-05 03:45:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:45:57.616597 | orchestrator | 2026-04-05 03:45:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:45:57.619519 | orchestrator | 2026-04-05 03:45:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:45:57.619645 | orchestrator | 2026-04-05 03:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:00.665154 | orchestrator | 2026-04-05 03:46:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:00.666715 | orchestrator | 2026-04-05 03:46:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:00.666778 | orchestrator | 2026-04-05 03:46:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:46:03.702919 | orchestrator | 2026-04-05 03:46:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:03.704048 | orchestrator | 2026-04-05 03:46:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:03.704110 | orchestrator | 2026-04-05 03:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:06.748462 | orchestrator | 2026-04-05 03:46:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:06.750811 | orchestrator | 2026-04-05 03:46:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:06.750883 | orchestrator | 2026-04-05 03:46:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:09.792220 | orchestrator | 2026-04-05 03:46:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:09.793818 | orchestrator | 2026-04-05 03:46:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:09.793901 | orchestrator | 2026-04-05 03:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:12.843726 | orchestrator | 2026-04-05 03:46:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:12.845404 | orchestrator | 2026-04-05 03:46:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:12.845494 | orchestrator | 2026-04-05 03:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:15.882603 | orchestrator | 2026-04-05 03:46:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:15.884392 | orchestrator | 2026-04-05 03:46:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:15.884456 | orchestrator | 2026-04-05 03:46:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:18.928893 | orchestrator | 2026-04-05 
03:46:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:18.929909 | orchestrator | 2026-04-05 03:46:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:18.930012 | orchestrator | 2026-04-05 03:46:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:21.982505 | orchestrator | 2026-04-05 03:46:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:21.984350 | orchestrator | 2026-04-05 03:46:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:21.984424 | orchestrator | 2026-04-05 03:46:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:25.040291 | orchestrator | 2026-04-05 03:46:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:25.041503 | orchestrator | 2026-04-05 03:46:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:25.041721 | orchestrator | 2026-04-05 03:46:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:28.085424 | orchestrator | 2026-04-05 03:46:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:28.087396 | orchestrator | 2026-04-05 03:46:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:28.087455 | orchestrator | 2026-04-05 03:46:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:31.131832 | orchestrator | 2026-04-05 03:46:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:31.135081 | orchestrator | 2026-04-05 03:46:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:31.135158 | orchestrator | 2026-04-05 03:46:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:34.182516 | orchestrator | 2026-04-05 03:46:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:46:34.185238 | orchestrator | 2026-04-05 03:46:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:34.185289 | orchestrator | 2026-04-05 03:46:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:37.236788 | orchestrator | 2026-04-05 03:46:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:37.239117 | orchestrator | 2026-04-05 03:46:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:37.239160 | orchestrator | 2026-04-05 03:46:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:40.287931 | orchestrator | 2026-04-05 03:46:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:40.289857 | orchestrator | 2026-04-05 03:46:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:40.289947 | orchestrator | 2026-04-05 03:46:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:43.338383 | orchestrator | 2026-04-05 03:46:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:43.339839 | orchestrator | 2026-04-05 03:46:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:43.339908 | orchestrator | 2026-04-05 03:46:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:46.384713 | orchestrator | 2026-04-05 03:46:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:46.386226 | orchestrator | 2026-04-05 03:46:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:46.386315 | orchestrator | 2026-04-05 03:46:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:49.445265 | orchestrator | 2026-04-05 03:46:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:49.447699 | orchestrator | 2026-04-05 03:46:49 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:49.447762 | orchestrator | 2026-04-05 03:46:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:52.498611 | orchestrator | 2026-04-05 03:46:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:52.499963 | orchestrator | 2026-04-05 03:46:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:52.500042 | orchestrator | 2026-04-05 03:46:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:55.554237 | orchestrator | 2026-04-05 03:46:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:55.556253 | orchestrator | 2026-04-05 03:46:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:55.556319 | orchestrator | 2026-04-05 03:46:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:46:58.606853 | orchestrator | 2026-04-05 03:46:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:46:58.608485 | orchestrator | 2026-04-05 03:46:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:46:58.608529 | orchestrator | 2026-04-05 03:46:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:47:01.654100 | orchestrator | 2026-04-05 03:47:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:47:01.656004 | orchestrator | 2026-04-05 03:47:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:47:01.656104 | orchestrator | 2026-04-05 03:47:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:47:04.701434 | orchestrator | 2026-04-05 03:47:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:47:04.703386 | orchestrator | 2026-04-05 03:47:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:47:04.703674 | orchestrator | 2026-04-05 03:47:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:47:07.761852 | orchestrator | 2026-04-05 03:47:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:47:07.764526 | orchestrator | 2026-04-05 03:47:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:47:07.764587 | orchestrator | 2026-04-05 03:47:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:47:10.814880 | orchestrator | 2026-04-05 03:47:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:47:10.818738 | orchestrator | 2026-04-05 03:47:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:47:10.818839 | orchestrator | 2026-04-05 03:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:47:13.868383 | orchestrator | 2026-04-05 03:47:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:47:13.872312 | orchestrator | 2026-04-05 03:47:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:47:13.872368 | orchestrator | 2026-04-05 03:47:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:47:16.922993 | orchestrator | 2026-04-05 03:47:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:47:16.924697 | orchestrator | 2026-04-05 03:47:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:47:16.924732 | orchestrator | 2026-04-05 03:47:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:47:19.980301 | orchestrator | 2026-04-05 03:47:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:47:19.981518 | orchestrator | 2026-04-05 03:47:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:47:19.981558 | orchestrator | 2026-04-05 03:47:19 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:47:23.032822 | orchestrator | 2026-04-05 03:47:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:47:23.034601 | orchestrator | 2026-04-05 03:47:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:47:23.034675 | orchestrator | 2026-04-05 03:47:23 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated roughly every 3 seconds from 03:47:26 to 03:52:34; both tasks remained in state STARTED throughout ...]
2026-04-05 03:52:37.284447 | orchestrator | 2026-04-05 03:52:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:52:37.287361 | orchestrator | 2026-04-05 03:52:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:52:37.287471 | orchestrator | 2026-04-05 03:52:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:52:40.336155 | orchestrator | 2026-04-05 03:52:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:52:40.340150 | orchestrator | 2026-04-05 03:52:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:52:40.340252 | orchestrator | 2026-04-05 03:52:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:52:43.382276 | orchestrator | 2026-04-05 03:52:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:52:43.382712 | orchestrator | 2026-04-05 03:52:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:52:43.382733 | orchestrator | 2026-04-05 03:52:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:52:46.426875 | orchestrator | 2026-04-05 03:52:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:52:46.427200 | orchestrator | 2026-04-05 03:52:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:52:46.427233 | orchestrator | 2026-04-05 03:52:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:52:49.470513 | orchestrator | 2026-04-05 03:52:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:52:49.471039 | orchestrator | 2026-04-05 03:52:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:52:49.471430 | orchestrator | 2026-04-05 03:52:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:52:52.513396 | orchestrator | 2026-04-05 03:52:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:52:52.514458 | orchestrator | 2026-04-05 03:52:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:52:52.514590 | orchestrator | 2026-04-05 03:52:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:52:55.553333 | orchestrator | 2026-04-05 
03:52:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:52:55.553920 | orchestrator | 2026-04-05 03:52:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:52:55.554001 | orchestrator | 2026-04-05 03:52:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:52:58.590859 | orchestrator | 2026-04-05 03:52:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:52:58.591038 | orchestrator | 2026-04-05 03:52:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:52:58.591056 | orchestrator | 2026-04-05 03:52:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:01.634588 | orchestrator | 2026-04-05 03:53:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:01.636208 | orchestrator | 2026-04-05 03:53:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:01.636287 | orchestrator | 2026-04-05 03:53:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:04.681389 | orchestrator | 2026-04-05 03:53:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:04.682367 | orchestrator | 2026-04-05 03:53:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:04.682418 | orchestrator | 2026-04-05 03:53:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:07.730780 | orchestrator | 2026-04-05 03:53:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:07.733652 | orchestrator | 2026-04-05 03:53:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:07.733761 | orchestrator | 2026-04-05 03:53:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:10.783069 | orchestrator | 2026-04-05 03:53:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:53:10.785128 | orchestrator | 2026-04-05 03:53:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:10.785628 | orchestrator | 2026-04-05 03:53:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:13.837109 | orchestrator | 2026-04-05 03:53:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:13.840336 | orchestrator | 2026-04-05 03:53:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:13.840392 | orchestrator | 2026-04-05 03:53:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:16.888487 | orchestrator | 2026-04-05 03:53:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:16.892830 | orchestrator | 2026-04-05 03:53:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:16.892905 | orchestrator | 2026-04-05 03:53:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:19.950231 | orchestrator | 2026-04-05 03:53:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:19.953281 | orchestrator | 2026-04-05 03:53:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:19.953431 | orchestrator | 2026-04-05 03:53:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:22.998003 | orchestrator | 2026-04-05 03:53:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:23.003850 | orchestrator | 2026-04-05 03:53:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:23.003980 | orchestrator | 2026-04-05 03:53:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:26.046074 | orchestrator | 2026-04-05 03:53:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:26.047246 | orchestrator | 2026-04-05 03:53:26 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:26.047301 | orchestrator | 2026-04-05 03:53:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:29.092284 | orchestrator | 2026-04-05 03:53:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:29.093513 | orchestrator | 2026-04-05 03:53:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:29.093594 | orchestrator | 2026-04-05 03:53:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:32.146405 | orchestrator | 2026-04-05 03:53:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:32.149237 | orchestrator | 2026-04-05 03:53:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:32.149337 | orchestrator | 2026-04-05 03:53:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:35.195946 | orchestrator | 2026-04-05 03:53:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:35.197672 | orchestrator | 2026-04-05 03:53:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:35.197715 | orchestrator | 2026-04-05 03:53:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:38.237401 | orchestrator | 2026-04-05 03:53:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:38.240043 | orchestrator | 2026-04-05 03:53:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:38.240111 | orchestrator | 2026-04-05 03:53:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:41.286106 | orchestrator | 2026-04-05 03:53:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:41.289086 | orchestrator | 2026-04-05 03:53:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:53:41.289118 | orchestrator | 2026-04-05 03:53:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:44.338237 | orchestrator | 2026-04-05 03:53:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:44.340191 | orchestrator | 2026-04-05 03:53:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:44.340391 | orchestrator | 2026-04-05 03:53:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:47.384519 | orchestrator | 2026-04-05 03:53:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:47.385522 | orchestrator | 2026-04-05 03:53:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:47.385619 | orchestrator | 2026-04-05 03:53:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:50.438192 | orchestrator | 2026-04-05 03:53:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:50.440252 | orchestrator | 2026-04-05 03:53:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:50.440315 | orchestrator | 2026-04-05 03:53:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:53.483047 | orchestrator | 2026-04-05 03:53:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:53.484633 | orchestrator | 2026-04-05 03:53:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:53.484680 | orchestrator | 2026-04-05 03:53:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:53:56.531240 | orchestrator | 2026-04-05 03:53:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:56.531493 | orchestrator | 2026-04-05 03:53:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:56.531521 | orchestrator | 2026-04-05 03:53:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:53:59.576766 | orchestrator | 2026-04-05 03:53:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:53:59.578561 | orchestrator | 2026-04-05 03:53:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:53:59.578657 | orchestrator | 2026-04-05 03:53:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:02.625947 | orchestrator | 2026-04-05 03:54:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:02.627666 | orchestrator | 2026-04-05 03:54:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:02.627736 | orchestrator | 2026-04-05 03:54:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:05.686637 | orchestrator | 2026-04-05 03:54:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:05.689056 | orchestrator | 2026-04-05 03:54:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:05.689106 | orchestrator | 2026-04-05 03:54:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:08.737199 | orchestrator | 2026-04-05 03:54:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:08.738898 | orchestrator | 2026-04-05 03:54:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:08.738993 | orchestrator | 2026-04-05 03:54:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:11.787830 | orchestrator | 2026-04-05 03:54:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:11.791008 | orchestrator | 2026-04-05 03:54:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:11.791074 | orchestrator | 2026-04-05 03:54:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:14.838302 | orchestrator | 2026-04-05 
03:54:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:14.839961 | orchestrator | 2026-04-05 03:54:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:14.840014 | orchestrator | 2026-04-05 03:54:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:17.892998 | orchestrator | 2026-04-05 03:54:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:17.895821 | orchestrator | 2026-04-05 03:54:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:17.896039 | orchestrator | 2026-04-05 03:54:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:20.943368 | orchestrator | 2026-04-05 03:54:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:20.944780 | orchestrator | 2026-04-05 03:54:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:20.944871 | orchestrator | 2026-04-05 03:54:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:23.993650 | orchestrator | 2026-04-05 03:54:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:23.994832 | orchestrator | 2026-04-05 03:54:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:23.994993 | orchestrator | 2026-04-05 03:54:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:27.041444 | orchestrator | 2026-04-05 03:54:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:27.043771 | orchestrator | 2026-04-05 03:54:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:27.043851 | orchestrator | 2026-04-05 03:54:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:30.090471 | orchestrator | 2026-04-05 03:54:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:54:30.093156 | orchestrator | 2026-04-05 03:54:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:30.093193 | orchestrator | 2026-04-05 03:54:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:33.137215 | orchestrator | 2026-04-05 03:54:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:33.137659 | orchestrator | 2026-04-05 03:54:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:33.137694 | orchestrator | 2026-04-05 03:54:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:36.179168 | orchestrator | 2026-04-05 03:54:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:36.179831 | orchestrator | 2026-04-05 03:54:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:36.179854 | orchestrator | 2026-04-05 03:54:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:39.225890 | orchestrator | 2026-04-05 03:54:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:39.227881 | orchestrator | 2026-04-05 03:54:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:39.228420 | orchestrator | 2026-04-05 03:54:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:42.272546 | orchestrator | 2026-04-05 03:54:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:42.273586 | orchestrator | 2026-04-05 03:54:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:42.273637 | orchestrator | 2026-04-05 03:54:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:45.327582 | orchestrator | 2026-04-05 03:54:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:45.329165 | orchestrator | 2026-04-05 03:54:45 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:45.329680 | orchestrator | 2026-04-05 03:54:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:48.374283 | orchestrator | 2026-04-05 03:54:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:48.375897 | orchestrator | 2026-04-05 03:54:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:48.375924 | orchestrator | 2026-04-05 03:54:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:51.422471 | orchestrator | 2026-04-05 03:54:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:51.424539 | orchestrator | 2026-04-05 03:54:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:51.424585 | orchestrator | 2026-04-05 03:54:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:54.470069 | orchestrator | 2026-04-05 03:54:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:54.470390 | orchestrator | 2026-04-05 03:54:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:54.470432 | orchestrator | 2026-04-05 03:54:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:54:57.517485 | orchestrator | 2026-04-05 03:54:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:54:57.522600 | orchestrator | 2026-04-05 03:54:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:54:57.522695 | orchestrator | 2026-04-05 03:54:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:00.577009 | orchestrator | 2026-04-05 03:55:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:00.578482 | orchestrator | 2026-04-05 03:55:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:55:00.578587 | orchestrator | 2026-04-05 03:55:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:03.635458 | orchestrator | 2026-04-05 03:55:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:03.637368 | orchestrator | 2026-04-05 03:55:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:03.637415 | orchestrator | 2026-04-05 03:55:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:06.695550 | orchestrator | 2026-04-05 03:55:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:06.697294 | orchestrator | 2026-04-05 03:55:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:06.697366 | orchestrator | 2026-04-05 03:55:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:09.743221 | orchestrator | 2026-04-05 03:55:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:09.745611 | orchestrator | 2026-04-05 03:55:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:09.745698 | orchestrator | 2026-04-05 03:55:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:12.794314 | orchestrator | 2026-04-05 03:55:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:12.798249 | orchestrator | 2026-04-05 03:55:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:12.798325 | orchestrator | 2026-04-05 03:55:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:15.841877 | orchestrator | 2026-04-05 03:55:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:15.845291 | orchestrator | 2026-04-05 03:55:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:15.845426 | orchestrator | 2026-04-05 03:55:15 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 03:55:18.881527 | orchestrator | 2026-04-05 03:55:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:18.881929 | orchestrator | 2026-04-05 03:55:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:18.881980 | orchestrator | 2026-04-05 03:55:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:21.935787 | orchestrator | 2026-04-05 03:55:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:21.941795 | orchestrator | 2026-04-05 03:55:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:21.941889 | orchestrator | 2026-04-05 03:55:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:24.995103 | orchestrator | 2026-04-05 03:55:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:24.997372 | orchestrator | 2026-04-05 03:55:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:24.997451 | orchestrator | 2026-04-05 03:55:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:28.045607 | orchestrator | 2026-04-05 03:55:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:28.048012 | orchestrator | 2026-04-05 03:55:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:28.048125 | orchestrator | 2026-04-05 03:55:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:31.099806 | orchestrator | 2026-04-05 03:55:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:31.101405 | orchestrator | 2026-04-05 03:55:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:31.101449 | orchestrator | 2026-04-05 03:55:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:34.160626 | orchestrator | 2026-04-05 
03:55:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:34.163224 | orchestrator | 2026-04-05 03:55:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:34.163281 | orchestrator | 2026-04-05 03:55:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:37.216811 | orchestrator | 2026-04-05 03:55:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:37.219773 | orchestrator | 2026-04-05 03:55:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:37.219834 | orchestrator | 2026-04-05 03:55:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:40.270859 | orchestrator | 2026-04-05 03:55:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:40.272812 | orchestrator | 2026-04-05 03:55:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:40.272854 | orchestrator | 2026-04-05 03:55:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:43.324679 | orchestrator | 2026-04-05 03:55:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:43.325788 | orchestrator | 2026-04-05 03:55:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:43.325834 | orchestrator | 2026-04-05 03:55:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:46.372005 | orchestrator | 2026-04-05 03:55:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:46.374763 | orchestrator | 2026-04-05 03:55:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:46.374818 | orchestrator | 2026-04-05 03:55:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:49.417013 | orchestrator | 2026-04-05 03:55:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 03:55:49.419055 | orchestrator | 2026-04-05 03:55:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:49.419125 | orchestrator | 2026-04-05 03:55:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:52.468157 | orchestrator | 2026-04-05 03:55:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:52.470088 | orchestrator | 2026-04-05 03:55:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:52.470158 | orchestrator | 2026-04-05 03:55:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:55.526617 | orchestrator | 2026-04-05 03:55:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:55.530340 | orchestrator | 2026-04-05 03:55:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:55.530394 | orchestrator | 2026-04-05 03:55:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:55:58.588677 | orchestrator | 2026-04-05 03:55:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:55:58.590525 | orchestrator | 2026-04-05 03:55:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:55:58.590627 | orchestrator | 2026-04-05 03:55:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:56:01.642892 | orchestrator | 2026-04-05 03:56:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:56:01.644880 | orchestrator | 2026-04-05 03:56:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:56:01.644938 | orchestrator | 2026-04-05 03:56:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:56:04.696111 | orchestrator | 2026-04-05 03:56:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:56:04.697452 | orchestrator | 2026-04-05 03:56:04 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:56:04.697518 | orchestrator | 2026-04-05 03:56:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:56:07.743174 | orchestrator | 2026-04-05 03:56:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:56:07.746618 | orchestrator | 2026-04-05 03:56:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:56:07.746698 | orchestrator | 2026-04-05 03:56:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:56:10.799568 | orchestrator | 2026-04-05 03:56:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:56:10.803107 | orchestrator | 2026-04-05 03:56:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:56:10.803249 | orchestrator | 2026-04-05 03:56:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:56:13.852525 | orchestrator | 2026-04-05 03:56:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:56:13.855087 | orchestrator | 2026-04-05 03:56:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:56:13.855275 | orchestrator | 2026-04-05 03:56:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:56:16.907989 | orchestrator | 2026-04-05 03:56:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:56:16.910285 | orchestrator | 2026-04-05 03:56:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 03:56:16.910346 | orchestrator | 2026-04-05 03:56:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 03:56:19.954350 | orchestrator | 2026-04-05 03:56:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 03:56:19.956155 | orchestrator | 2026-04-05 03:56:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
03:56:19.956267 | orchestrator | 2026-04-05 03:56:19 | INFO  | Wait 1 second(s) until the next check
2026-04-05 03:56:22.997800 | orchestrator | 2026-04-05 03:56:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 03:56:23.003143 | orchestrator | 2026-04-05 03:56:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 03:56:23.003287 | orchestrator | 2026-04-05 03:56:23 | INFO  | Wait 1 second(s) until the next check
[... repeated status checks from 03:56:26 to 04:03:49 elided: both tasks polled every ~3 second(s) and remained in state STARTED throughout; no polling entries were logged between 03:57:14 and 03:59:18 ...]
2026-04-05 04:03:52.341256 | orchestrator | 2026-04-05 04:03:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 04:03:52.342117 | orchestrator | 2026-04-05 04:03:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 04:03:52.342162 | orchestrator | 2026-04-05 04:03:52 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:03:55.381147 | orchestrator | 2026-04-05 04:03:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:03:55.382952 | orchestrator | 2026-04-05 04:03:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:03:55.382999 | orchestrator | 2026-04-05 04:03:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:03:58.425802 | orchestrator | 2026-04-05 04:03:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:03:58.427812 | orchestrator | 2026-04-05 04:03:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:03:58.427860 | orchestrator | 2026-04-05 04:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:01.469249 | orchestrator | 2026-04-05 04:04:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:01.471923 | orchestrator | 2026-04-05 04:04:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:01.471977 | orchestrator | 2026-04-05 04:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:04.520585 | orchestrator | 2026-04-05 04:04:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:04.521957 | orchestrator | 2026-04-05 04:04:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:04.522177 | orchestrator | 2026-04-05 04:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:07.567920 | orchestrator | 2026-04-05 04:04:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:07.569616 | orchestrator | 2026-04-05 04:04:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:07.569677 | orchestrator | 2026-04-05 04:04:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:10.617356 | orchestrator | 2026-04-05 
04:04:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:10.618157 | orchestrator | 2026-04-05 04:04:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:10.618209 | orchestrator | 2026-04-05 04:04:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:13.668100 | orchestrator | 2026-04-05 04:04:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:13.670403 | orchestrator | 2026-04-05 04:04:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:13.670499 | orchestrator | 2026-04-05 04:04:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:16.716225 | orchestrator | 2026-04-05 04:04:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:16.719244 | orchestrator | 2026-04-05 04:04:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:16.719322 | orchestrator | 2026-04-05 04:04:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:19.772218 | orchestrator | 2026-04-05 04:04:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:19.773962 | orchestrator | 2026-04-05 04:04:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:19.774135 | orchestrator | 2026-04-05 04:04:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:22.826105 | orchestrator | 2026-04-05 04:04:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:22.827222 | orchestrator | 2026-04-05 04:04:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:22.827262 | orchestrator | 2026-04-05 04:04:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:25.872535 | orchestrator | 2026-04-05 04:04:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 04:04:25.875096 | orchestrator | 2026-04-05 04:04:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:25.875143 | orchestrator | 2026-04-05 04:04:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:28.922911 | orchestrator | 2026-04-05 04:04:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:28.924934 | orchestrator | 2026-04-05 04:04:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:28.924983 | orchestrator | 2026-04-05 04:04:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:31.966456 | orchestrator | 2026-04-05 04:04:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:31.967897 | orchestrator | 2026-04-05 04:04:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:31.967944 | orchestrator | 2026-04-05 04:04:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:35.011375 | orchestrator | 2026-04-05 04:04:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:35.013192 | orchestrator | 2026-04-05 04:04:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:35.013260 | orchestrator | 2026-04-05 04:04:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:38.058320 | orchestrator | 2026-04-05 04:04:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:38.059386 | orchestrator | 2026-04-05 04:04:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:38.059461 | orchestrator | 2026-04-05 04:04:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:41.101031 | orchestrator | 2026-04-05 04:04:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:41.102553 | orchestrator | 2026-04-05 04:04:41 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:41.102635 | orchestrator | 2026-04-05 04:04:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:44.150399 | orchestrator | 2026-04-05 04:04:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:44.153227 | orchestrator | 2026-04-05 04:04:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:44.153301 | orchestrator | 2026-04-05 04:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:47.199137 | orchestrator | 2026-04-05 04:04:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:47.199665 | orchestrator | 2026-04-05 04:04:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:47.199713 | orchestrator | 2026-04-05 04:04:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:50.244339 | orchestrator | 2026-04-05 04:04:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:50.245595 | orchestrator | 2026-04-05 04:04:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:50.245631 | orchestrator | 2026-04-05 04:04:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:53.289500 | orchestrator | 2026-04-05 04:04:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:53.291779 | orchestrator | 2026-04-05 04:04:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:53.291838 | orchestrator | 2026-04-05 04:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:56.332217 | orchestrator | 2026-04-05 04:04:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:56.332723 | orchestrator | 2026-04-05 04:04:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:04:56.332787 | orchestrator | 2026-04-05 04:04:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:04:59.381642 | orchestrator | 2026-04-05 04:04:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:04:59.383480 | orchestrator | 2026-04-05 04:04:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:04:59.383908 | orchestrator | 2026-04-05 04:04:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:02.426610 | orchestrator | 2026-04-05 04:05:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:02.428280 | orchestrator | 2026-04-05 04:05:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:02.428307 | orchestrator | 2026-04-05 04:05:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:05.470612 | orchestrator | 2026-04-05 04:05:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:05.471533 | orchestrator | 2026-04-05 04:05:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:05.471575 | orchestrator | 2026-04-05 04:05:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:08.525112 | orchestrator | 2026-04-05 04:05:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:08.527517 | orchestrator | 2026-04-05 04:05:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:08.527582 | orchestrator | 2026-04-05 04:05:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:11.569243 | orchestrator | 2026-04-05 04:05:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:11.572120 | orchestrator | 2026-04-05 04:05:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:11.572212 | orchestrator | 2026-04-05 04:05:11 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:05:14.614381 | orchestrator | 2026-04-05 04:05:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:14.616158 | orchestrator | 2026-04-05 04:05:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:14.616212 | orchestrator | 2026-04-05 04:05:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:17.656570 | orchestrator | 2026-04-05 04:05:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:17.659127 | orchestrator | 2026-04-05 04:05:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:17.659188 | orchestrator | 2026-04-05 04:05:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:20.696325 | orchestrator | 2026-04-05 04:05:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:20.698531 | orchestrator | 2026-04-05 04:05:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:20.698587 | orchestrator | 2026-04-05 04:05:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:23.748265 | orchestrator | 2026-04-05 04:05:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:23.749577 | orchestrator | 2026-04-05 04:05:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:23.749621 | orchestrator | 2026-04-05 04:05:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:26.795161 | orchestrator | 2026-04-05 04:05:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:26.796933 | orchestrator | 2026-04-05 04:05:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:26.796983 | orchestrator | 2026-04-05 04:05:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:29.843640 | orchestrator | 2026-04-05 
04:05:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:29.845544 | orchestrator | 2026-04-05 04:05:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:29.845738 | orchestrator | 2026-04-05 04:05:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:32.898498 | orchestrator | 2026-04-05 04:05:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:32.900768 | orchestrator | 2026-04-05 04:05:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:32.900807 | orchestrator | 2026-04-05 04:05:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:35.949514 | orchestrator | 2026-04-05 04:05:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:35.951342 | orchestrator | 2026-04-05 04:05:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:35.951394 | orchestrator | 2026-04-05 04:05:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:39.006513 | orchestrator | 2026-04-05 04:05:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:39.008384 | orchestrator | 2026-04-05 04:05:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:39.008447 | orchestrator | 2026-04-05 04:05:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:42.057624 | orchestrator | 2026-04-05 04:05:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:42.059686 | orchestrator | 2026-04-05 04:05:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:42.059759 | orchestrator | 2026-04-05 04:05:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:45.101179 | orchestrator | 2026-04-05 04:05:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 04:05:45.102277 | orchestrator | 2026-04-05 04:05:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:45.102337 | orchestrator | 2026-04-05 04:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:48.147668 | orchestrator | 2026-04-05 04:05:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:48.148886 | orchestrator | 2026-04-05 04:05:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:48.148929 | orchestrator | 2026-04-05 04:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:51.194230 | orchestrator | 2026-04-05 04:05:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:51.196106 | orchestrator | 2026-04-05 04:05:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:51.196162 | orchestrator | 2026-04-05 04:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:54.254202 | orchestrator | 2026-04-05 04:05:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:54.255821 | orchestrator | 2026-04-05 04:05:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:54.255904 | orchestrator | 2026-04-05 04:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:05:57.305668 | orchestrator | 2026-04-05 04:05:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:05:57.308918 | orchestrator | 2026-04-05 04:05:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:05:57.308993 | orchestrator | 2026-04-05 04:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:00.348820 | orchestrator | 2026-04-05 04:06:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:00.350269 | orchestrator | 2026-04-05 04:06:00 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:00.350334 | orchestrator | 2026-04-05 04:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:03.404711 | orchestrator | 2026-04-05 04:06:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:03.405967 | orchestrator | 2026-04-05 04:06:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:03.406217 | orchestrator | 2026-04-05 04:06:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:06.452814 | orchestrator | 2026-04-05 04:06:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:06.454687 | orchestrator | 2026-04-05 04:06:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:06.454747 | orchestrator | 2026-04-05 04:06:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:09.510972 | orchestrator | 2026-04-05 04:06:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:09.512331 | orchestrator | 2026-04-05 04:06:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:09.512374 | orchestrator | 2026-04-05 04:06:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:12.549063 | orchestrator | 2026-04-05 04:06:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:12.549282 | orchestrator | 2026-04-05 04:06:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:12.549308 | orchestrator | 2026-04-05 04:06:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:15.586260 | orchestrator | 2026-04-05 04:06:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:15.587387 | orchestrator | 2026-04-05 04:06:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:06:15.587417 | orchestrator | 2026-04-05 04:06:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:18.626639 | orchestrator | 2026-04-05 04:06:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:18.628616 | orchestrator | 2026-04-05 04:06:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:18.628659 | orchestrator | 2026-04-05 04:06:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:21.676176 | orchestrator | 2026-04-05 04:06:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:21.678166 | orchestrator | 2026-04-05 04:06:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:21.678229 | orchestrator | 2026-04-05 04:06:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:24.730253 | orchestrator | 2026-04-05 04:06:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:24.732500 | orchestrator | 2026-04-05 04:06:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:24.732585 | orchestrator | 2026-04-05 04:06:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:27.773319 | orchestrator | 2026-04-05 04:06:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:27.775333 | orchestrator | 2026-04-05 04:06:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:27.775400 | orchestrator | 2026-04-05 04:06:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:30.817390 | orchestrator | 2026-04-05 04:06:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:30.819106 | orchestrator | 2026-04-05 04:06:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:30.819205 | orchestrator | 2026-04-05 04:06:30 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:06:33.863359 | orchestrator | 2026-04-05 04:06:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:33.864239 | orchestrator | 2026-04-05 04:06:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:33.864299 | orchestrator | 2026-04-05 04:06:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:36.910800 | orchestrator | 2026-04-05 04:06:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:36.912577 | orchestrator | 2026-04-05 04:06:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:36.912628 | orchestrator | 2026-04-05 04:06:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:39.961956 | orchestrator | 2026-04-05 04:06:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:39.965425 | orchestrator | 2026-04-05 04:06:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:39.965546 | orchestrator | 2026-04-05 04:06:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:43.018429 | orchestrator | 2026-04-05 04:06:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:43.020573 | orchestrator | 2026-04-05 04:06:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:43.020651 | orchestrator | 2026-04-05 04:06:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:46.062714 | orchestrator | 2026-04-05 04:06:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:46.064230 | orchestrator | 2026-04-05 04:06:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:46.064277 | orchestrator | 2026-04-05 04:06:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:49.113664 | orchestrator | 2026-04-05 
04:06:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:49.115738 | orchestrator | 2026-04-05 04:06:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:49.115791 | orchestrator | 2026-04-05 04:06:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:52.162560 | orchestrator | 2026-04-05 04:06:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:52.164457 | orchestrator | 2026-04-05 04:06:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:52.164494 | orchestrator | 2026-04-05 04:06:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:55.208428 | orchestrator | 2026-04-05 04:06:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:55.209784 | orchestrator | 2026-04-05 04:06:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:55.209826 | orchestrator | 2026-04-05 04:06:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:06:58.257664 | orchestrator | 2026-04-05 04:06:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:06:58.259514 | orchestrator | 2026-04-05 04:06:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:06:58.259570 | orchestrator | 2026-04-05 04:06:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:01.305169 | orchestrator | 2026-04-05 04:07:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:01.306948 | orchestrator | 2026-04-05 04:07:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:01.306995 | orchestrator | 2026-04-05 04:07:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:04.349221 | orchestrator | 2026-04-05 04:07:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 04:07:04.351288 | orchestrator | 2026-04-05 04:07:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:04.351335 | orchestrator | 2026-04-05 04:07:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:07.389012 | orchestrator | 2026-04-05 04:07:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:07.390274 | orchestrator | 2026-04-05 04:07:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:07.390335 | orchestrator | 2026-04-05 04:07:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:10.438616 | orchestrator | 2026-04-05 04:07:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:10.439304 | orchestrator | 2026-04-05 04:07:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:10.439356 | orchestrator | 2026-04-05 04:07:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:13.487128 | orchestrator | 2026-04-05 04:07:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:13.489384 | orchestrator | 2026-04-05 04:07:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:13.489455 | orchestrator | 2026-04-05 04:07:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:16.530238 | orchestrator | 2026-04-05 04:07:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:16.532097 | orchestrator | 2026-04-05 04:07:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:16.532138 | orchestrator | 2026-04-05 04:07:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:19.568274 | orchestrator | 2026-04-05 04:07:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:19.568898 | orchestrator | 2026-04-05 04:07:19 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:19.568936 | orchestrator | 2026-04-05 04:07:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:22.609198 | orchestrator | 2026-04-05 04:07:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:22.609484 | orchestrator | 2026-04-05 04:07:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:22.609514 | orchestrator | 2026-04-05 04:07:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:25.662176 | orchestrator | 2026-04-05 04:07:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:25.663244 | orchestrator | 2026-04-05 04:07:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:25.663317 | orchestrator | 2026-04-05 04:07:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:28.712850 | orchestrator | 2026-04-05 04:07:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:28.714574 | orchestrator | 2026-04-05 04:07:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:28.714624 | orchestrator | 2026-04-05 04:07:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:31.763644 | orchestrator | 2026-04-05 04:07:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:31.764641 | orchestrator | 2026-04-05 04:07:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:07:31.764944 | orchestrator | 2026-04-05 04:07:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:07:34.813243 | orchestrator | 2026-04-05 04:07:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:07:34.814977 | orchestrator | 2026-04-05 04:07:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:07:34.815025 | orchestrator | 2026-04-05 04:07:34 | INFO  | Wait 1 second(s) until the next check
2026-04-05 04:07:37.853102 | orchestrator | 2026-04-05 04:07:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 04:07:37.853701 | orchestrator | 2026-04-05 04:07:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 04:07:37.854070 | orchestrator | 2026-04-05 04:07:37 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds; both tasks remained in state STARTED from 04:07:40 through 04:12:33 ...]
2026-04-05 04:12:36.644537 | orchestrator | 2026-04-05 04:12:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 04:12:36.647145 | orchestrator | 2026-04-05 04:12:36 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:12:36.647267 | orchestrator | 2026-04-05 04:12:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:12:39.683287 | orchestrator | 2026-04-05 04:12:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:12:39.683556 | orchestrator | 2026-04-05 04:12:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:12:39.683585 | orchestrator | 2026-04-05 04:12:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:12:42.726753 | orchestrator | 2026-04-05 04:12:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:12:42.728685 | orchestrator | 2026-04-05 04:12:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:12:42.728723 | orchestrator | 2026-04-05 04:12:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:12:45.768415 | orchestrator | 2026-04-05 04:12:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:12:45.770797 | orchestrator | 2026-04-05 04:12:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:12:45.770877 | orchestrator | 2026-04-05 04:12:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:12:48.817216 | orchestrator | 2026-04-05 04:12:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:12:48.818584 | orchestrator | 2026-04-05 04:12:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:12:48.818637 | orchestrator | 2026-04-05 04:12:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:12:51.875282 | orchestrator | 2026-04-05 04:12:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:12:51.876412 | orchestrator | 2026-04-05 04:12:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:12:51.876507 | orchestrator | 2026-04-05 04:12:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:12:54.926302 | orchestrator | 2026-04-05 04:12:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:12:54.926600 | orchestrator | 2026-04-05 04:12:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:12:54.926617 | orchestrator | 2026-04-05 04:12:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:12:57.981029 | orchestrator | 2026-04-05 04:12:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:12:57.983008 | orchestrator | 2026-04-05 04:12:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:12:57.983059 | orchestrator | 2026-04-05 04:12:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:01.029487 | orchestrator | 2026-04-05 04:13:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:01.031238 | orchestrator | 2026-04-05 04:13:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:01.031295 | orchestrator | 2026-04-05 04:13:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:04.082131 | orchestrator | 2026-04-05 04:13:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:04.083440 | orchestrator | 2026-04-05 04:13:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:04.083484 | orchestrator | 2026-04-05 04:13:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:07.132563 | orchestrator | 2026-04-05 04:13:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:07.133665 | orchestrator | 2026-04-05 04:13:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:07.133738 | orchestrator | 2026-04-05 04:13:07 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:13:10.189959 | orchestrator | 2026-04-05 04:13:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:10.191828 | orchestrator | 2026-04-05 04:13:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:10.191873 | orchestrator | 2026-04-05 04:13:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:13.236505 | orchestrator | 2026-04-05 04:13:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:13.238549 | orchestrator | 2026-04-05 04:13:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:13.238630 | orchestrator | 2026-04-05 04:13:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:16.281423 | orchestrator | 2026-04-05 04:13:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:16.283282 | orchestrator | 2026-04-05 04:13:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:16.283323 | orchestrator | 2026-04-05 04:13:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:19.330446 | orchestrator | 2026-04-05 04:13:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:19.331433 | orchestrator | 2026-04-05 04:13:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:19.331503 | orchestrator | 2026-04-05 04:13:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:22.384513 | orchestrator | 2026-04-05 04:13:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:22.388694 | orchestrator | 2026-04-05 04:13:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:22.388793 | orchestrator | 2026-04-05 04:13:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:25.440010 | orchestrator | 2026-04-05 
04:13:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:25.442926 | orchestrator | 2026-04-05 04:13:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:25.442986 | orchestrator | 2026-04-05 04:13:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:28.486495 | orchestrator | 2026-04-05 04:13:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:28.490808 | orchestrator | 2026-04-05 04:13:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:28.490874 | orchestrator | 2026-04-05 04:13:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:31.532668 | orchestrator | 2026-04-05 04:13:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:31.534438 | orchestrator | 2026-04-05 04:13:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:31.534490 | orchestrator | 2026-04-05 04:13:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:34.586339 | orchestrator | 2026-04-05 04:13:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:34.588597 | orchestrator | 2026-04-05 04:13:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:34.588664 | orchestrator | 2026-04-05 04:13:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:37.634369 | orchestrator | 2026-04-05 04:13:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:37.636881 | orchestrator | 2026-04-05 04:13:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:37.637017 | orchestrator | 2026-04-05 04:13:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:40.694170 | orchestrator | 2026-04-05 04:13:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 04:13:40.696755 | orchestrator | 2026-04-05 04:13:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:40.696821 | orchestrator | 2026-04-05 04:13:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:43.745365 | orchestrator | 2026-04-05 04:13:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:43.748797 | orchestrator | 2026-04-05 04:13:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:43.748892 | orchestrator | 2026-04-05 04:13:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:46.793825 | orchestrator | 2026-04-05 04:13:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:46.796650 | orchestrator | 2026-04-05 04:13:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:46.796707 | orchestrator | 2026-04-05 04:13:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:49.843440 | orchestrator | 2026-04-05 04:13:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:49.848802 | orchestrator | 2026-04-05 04:13:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:49.848890 | orchestrator | 2026-04-05 04:13:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:52.891291 | orchestrator | 2026-04-05 04:13:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:52.892618 | orchestrator | 2026-04-05 04:13:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:52.892670 | orchestrator | 2026-04-05 04:13:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:55.930964 | orchestrator | 2026-04-05 04:13:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:55.931644 | orchestrator | 2026-04-05 04:13:55 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:55.931683 | orchestrator | 2026-04-05 04:13:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:13:58.982308 | orchestrator | 2026-04-05 04:13:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:13:58.983662 | orchestrator | 2026-04-05 04:13:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:13:58.983695 | orchestrator | 2026-04-05 04:13:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:02.033050 | orchestrator | 2026-04-05 04:14:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:02.037238 | orchestrator | 2026-04-05 04:14:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:02.037330 | orchestrator | 2026-04-05 04:14:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:05.085675 | orchestrator | 2026-04-05 04:14:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:05.089284 | orchestrator | 2026-04-05 04:14:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:05.089397 | orchestrator | 2026-04-05 04:14:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:08.134523 | orchestrator | 2026-04-05 04:14:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:08.136677 | orchestrator | 2026-04-05 04:14:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:08.136739 | orchestrator | 2026-04-05 04:14:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:11.188070 | orchestrator | 2026-04-05 04:14:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:11.190625 | orchestrator | 2026-04-05 04:14:11 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:14:11.190665 | orchestrator | 2026-04-05 04:14:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:14.243637 | orchestrator | 2026-04-05 04:14:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:14.247009 | orchestrator | 2026-04-05 04:14:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:14.247187 | orchestrator | 2026-04-05 04:14:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:17.300213 | orchestrator | 2026-04-05 04:14:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:17.301462 | orchestrator | 2026-04-05 04:14:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:17.301508 | orchestrator | 2026-04-05 04:14:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:20.348234 | orchestrator | 2026-04-05 04:14:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:20.348861 | orchestrator | 2026-04-05 04:14:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:20.348956 | orchestrator | 2026-04-05 04:14:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:23.409547 | orchestrator | 2026-04-05 04:14:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:23.411251 | orchestrator | 2026-04-05 04:14:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:23.411288 | orchestrator | 2026-04-05 04:14:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:26.449599 | orchestrator | 2026-04-05 04:14:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:26.452577 | orchestrator | 2026-04-05 04:14:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:26.452652 | orchestrator | 2026-04-05 04:14:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:14:29.500270 | orchestrator | 2026-04-05 04:14:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:29.501990 | orchestrator | 2026-04-05 04:14:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:29.502082 | orchestrator | 2026-04-05 04:14:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:32.540314 | orchestrator | 2026-04-05 04:14:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:32.541898 | orchestrator | 2026-04-05 04:14:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:32.542204 | orchestrator | 2026-04-05 04:14:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:35.583131 | orchestrator | 2026-04-05 04:14:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:35.584460 | orchestrator | 2026-04-05 04:14:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:35.584522 | orchestrator | 2026-04-05 04:14:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:38.629318 | orchestrator | 2026-04-05 04:14:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:38.630596 | orchestrator | 2026-04-05 04:14:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:38.630655 | orchestrator | 2026-04-05 04:14:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:41.680123 | orchestrator | 2026-04-05 04:14:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:41.681538 | orchestrator | 2026-04-05 04:14:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:41.681629 | orchestrator | 2026-04-05 04:14:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:44.729357 | orchestrator | 2026-04-05 
04:14:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:44.731213 | orchestrator | 2026-04-05 04:14:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:44.731298 | orchestrator | 2026-04-05 04:14:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:47.782700 | orchestrator | 2026-04-05 04:14:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:47.784547 | orchestrator | 2026-04-05 04:14:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:47.784609 | orchestrator | 2026-04-05 04:14:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:50.822667 | orchestrator | 2026-04-05 04:14:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:50.823583 | orchestrator | 2026-04-05 04:14:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:50.823644 | orchestrator | 2026-04-05 04:14:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:53.873533 | orchestrator | 2026-04-05 04:14:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:53.874629 | orchestrator | 2026-04-05 04:14:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:53.874838 | orchestrator | 2026-04-05 04:14:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:56.918991 | orchestrator | 2026-04-05 04:14:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:14:56.921626 | orchestrator | 2026-04-05 04:14:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:56.921689 | orchestrator | 2026-04-05 04:14:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:14:59.968424 | orchestrator | 2026-04-05 04:14:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 04:14:59.971402 | orchestrator | 2026-04-05 04:14:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:14:59.971470 | orchestrator | 2026-04-05 04:14:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:03.015861 | orchestrator | 2026-04-05 04:15:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:03.016225 | orchestrator | 2026-04-05 04:15:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:03.016251 | orchestrator | 2026-04-05 04:15:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:06.057256 | orchestrator | 2026-04-05 04:15:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:06.059248 | orchestrator | 2026-04-05 04:15:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:06.059300 | orchestrator | 2026-04-05 04:15:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:09.106725 | orchestrator | 2026-04-05 04:15:09 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:09.109018 | orchestrator | 2026-04-05 04:15:09 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:09.109073 | orchestrator | 2026-04-05 04:15:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:12.156486 | orchestrator | 2026-04-05 04:15:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:12.158885 | orchestrator | 2026-04-05 04:15:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:12.159058 | orchestrator | 2026-04-05 04:15:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:15.208506 | orchestrator | 2026-04-05 04:15:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:15.210633 | orchestrator | 2026-04-05 04:15:15 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:15.210715 | orchestrator | 2026-04-05 04:15:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:18.251262 | orchestrator | 2026-04-05 04:15:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:18.255071 | orchestrator | 2026-04-05 04:15:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:18.255184 | orchestrator | 2026-04-05 04:15:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:21.297399 | orchestrator | 2026-04-05 04:15:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:21.299440 | orchestrator | 2026-04-05 04:15:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:21.299494 | orchestrator | 2026-04-05 04:15:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:24.351376 | orchestrator | 2026-04-05 04:15:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:24.353233 | orchestrator | 2026-04-05 04:15:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:24.353292 | orchestrator | 2026-04-05 04:15:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:27.407122 | orchestrator | 2026-04-05 04:15:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:27.409321 | orchestrator | 2026-04-05 04:15:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:27.409365 | orchestrator | 2026-04-05 04:15:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:30.450771 | orchestrator | 2026-04-05 04:15:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:30.451999 | orchestrator | 2026-04-05 04:15:30 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:15:30.452059 | orchestrator | 2026-04-05 04:15:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:33.495505 | orchestrator | 2026-04-05 04:15:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:33.496050 | orchestrator | 2026-04-05 04:15:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:33.496101 | orchestrator | 2026-04-05 04:15:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:36.542188 | orchestrator | 2026-04-05 04:15:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:36.544449 | orchestrator | 2026-04-05 04:15:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:36.544515 | orchestrator | 2026-04-05 04:15:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:39.594822 | orchestrator | 2026-04-05 04:15:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:39.597382 | orchestrator | 2026-04-05 04:15:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:39.597455 | orchestrator | 2026-04-05 04:15:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:42.648760 | orchestrator | 2026-04-05 04:15:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:42.650986 | orchestrator | 2026-04-05 04:15:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:42.651051 | orchestrator | 2026-04-05 04:15:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:45.700359 | orchestrator | 2026-04-05 04:15:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:45.701825 | orchestrator | 2026-04-05 04:15:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:45.701871 | orchestrator | 2026-04-05 04:15:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:15:48.749524 | orchestrator | 2026-04-05 04:15:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:48.751262 | orchestrator | 2026-04-05 04:15:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:48.751306 | orchestrator | 2026-04-05 04:15:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:51.793555 | orchestrator | 2026-04-05 04:15:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:51.794236 | orchestrator | 2026-04-05 04:15:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:51.794272 | orchestrator | 2026-04-05 04:15:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:54.836643 | orchestrator | 2026-04-05 04:15:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:54.840166 | orchestrator | 2026-04-05 04:15:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:54.840230 | orchestrator | 2026-04-05 04:15:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:15:57.899005 | orchestrator | 2026-04-05 04:15:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:15:57.903074 | orchestrator | 2026-04-05 04:15:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:15:57.903178 | orchestrator | 2026-04-05 04:15:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:16:00.950293 | orchestrator | 2026-04-05 04:16:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:16:00.952263 | orchestrator | 2026-04-05 04:16:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:16:00.952385 | orchestrator | 2026-04-05 04:16:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:16:04.009652 | orchestrator | 2026-04-05 
04:16:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:16:04.012385 | orchestrator | 2026-04-05 04:16:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:16:04.012452 | orchestrator | 2026-04-05 04:16:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:16:07.054254 | orchestrator | 2026-04-05 04:16:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:16:07.055574 | orchestrator | 2026-04-05 04:16:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:16:07.055616 | orchestrator | 2026-04-05 04:16:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:16:10.096335 | orchestrator | 2026-04-05 04:16:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:16:10.096968 | orchestrator | 2026-04-05 04:16:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:16:10.097018 | orchestrator | 2026-04-05 04:16:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:16:13.144370 | orchestrator | 2026-04-05 04:16:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:16:13.147236 | orchestrator | 2026-04-05 04:16:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:16:13.147316 | orchestrator | 2026-04-05 04:16:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:16:16.195370 | orchestrator | 2026-04-05 04:16:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:16:16.197603 | orchestrator | 2026-04-05 04:16:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:16:16.197645 | orchestrator | 2026-04-05 04:16:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:16:19.252194 | orchestrator | 2026-04-05 04:16:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 04:16:19.255837 | orchestrator | 2026-04-05 04:16:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:16:19.255894 | orchestrator | 2026-04-05 04:16:19 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 04:16:22 through 04:21:48: tasks 4b2f39f3-48fc-4b60-b795-ddad107a749f and 470acebf-b2f0-4009-9f38-3f43b0aca299 both remained in state STARTED ...]
2026-04-05 04:21:51.803175 | orchestrator | 2026-04-05 04:21:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:21:51.804302 | orchestrator | 2026-04-05 04:21:51 | INFO 
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:21:51.804348 | orchestrator | 2026-04-05 04:21:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:21:54.855619 | orchestrator | 2026-04-05 04:21:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:21:54.857482 | orchestrator | 2026-04-05 04:21:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:21:54.857578 | orchestrator | 2026-04-05 04:21:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:21:57.902940 | orchestrator | 2026-04-05 04:21:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:21:57.905290 | orchestrator | 2026-04-05 04:21:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:21:57.905356 | orchestrator | 2026-04-05 04:21:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:00.952751 | orchestrator | 2026-04-05 04:22:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:00.954824 | orchestrator | 2026-04-05 04:22:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:00.955226 | orchestrator | 2026-04-05 04:22:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:04.005749 | orchestrator | 2026-04-05 04:22:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:04.009236 | orchestrator | 2026-04-05 04:22:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:04.009306 | orchestrator | 2026-04-05 04:22:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:07.054436 | orchestrator | 2026-04-05 04:22:07 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:07.055459 | orchestrator | 2026-04-05 04:22:07 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:22:07.055535 | orchestrator | 2026-04-05 04:22:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:10.095264 | orchestrator | 2026-04-05 04:22:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:10.095837 | orchestrator | 2026-04-05 04:22:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:10.095936 | orchestrator | 2026-04-05 04:22:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:13.144580 | orchestrator | 2026-04-05 04:22:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:13.146307 | orchestrator | 2026-04-05 04:22:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:13.146453 | orchestrator | 2026-04-05 04:22:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:16.194536 | orchestrator | 2026-04-05 04:22:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:16.195511 | orchestrator | 2026-04-05 04:22:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:16.195650 | orchestrator | 2026-04-05 04:22:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:19.252732 | orchestrator | 2026-04-05 04:22:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:19.255290 | orchestrator | 2026-04-05 04:22:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:19.255371 | orchestrator | 2026-04-05 04:22:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:22.299586 | orchestrator | 2026-04-05 04:22:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:22.301965 | orchestrator | 2026-04-05 04:22:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:22.302187 | orchestrator | 2026-04-05 04:22:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:22:25.342468 | orchestrator | 2026-04-05 04:22:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:25.343939 | orchestrator | 2026-04-05 04:22:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:25.343978 | orchestrator | 2026-04-05 04:22:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:28.386227 | orchestrator | 2026-04-05 04:22:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:28.388068 | orchestrator | 2026-04-05 04:22:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:28.388096 | orchestrator | 2026-04-05 04:22:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:31.441061 | orchestrator | 2026-04-05 04:22:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:31.442816 | orchestrator | 2026-04-05 04:22:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:31.443007 | orchestrator | 2026-04-05 04:22:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:34.485947 | orchestrator | 2026-04-05 04:22:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:34.488493 | orchestrator | 2026-04-05 04:22:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:34.488590 | orchestrator | 2026-04-05 04:22:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:37.534719 | orchestrator | 2026-04-05 04:22:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:37.536127 | orchestrator | 2026-04-05 04:22:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:37.536175 | orchestrator | 2026-04-05 04:22:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:40.578087 | orchestrator | 2026-04-05 
04:22:40 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:40.579134 | orchestrator | 2026-04-05 04:22:40 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:40.579171 | orchestrator | 2026-04-05 04:22:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:43.633418 | orchestrator | 2026-04-05 04:22:43 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:43.635425 | orchestrator | 2026-04-05 04:22:43 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:43.635485 | orchestrator | 2026-04-05 04:22:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:46.689797 | orchestrator | 2026-04-05 04:22:46 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:46.691661 | orchestrator | 2026-04-05 04:22:46 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:46.691756 | orchestrator | 2026-04-05 04:22:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:49.739967 | orchestrator | 2026-04-05 04:22:49 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:49.743281 | orchestrator | 2026-04-05 04:22:49 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:49.743368 | orchestrator | 2026-04-05 04:22:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:52.789791 | orchestrator | 2026-04-05 04:22:52 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:52.791263 | orchestrator | 2026-04-05 04:22:52 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:52.791306 | orchestrator | 2026-04-05 04:22:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:55.831124 | orchestrator | 2026-04-05 04:22:55 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 04:22:55.832033 | orchestrator | 2026-04-05 04:22:55 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:55.832067 | orchestrator | 2026-04-05 04:22:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:22:58.881999 | orchestrator | 2026-04-05 04:22:58 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:22:58.884615 | orchestrator | 2026-04-05 04:22:58 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:22:58.884681 | orchestrator | 2026-04-05 04:22:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:01.926117 | orchestrator | 2026-04-05 04:23:01 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:01.928308 | orchestrator | 2026-04-05 04:23:01 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:01.928386 | orchestrator | 2026-04-05 04:23:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:04.967962 | orchestrator | 2026-04-05 04:23:04 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:04.971150 | orchestrator | 2026-04-05 04:23:04 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:04.971227 | orchestrator | 2026-04-05 04:23:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:08.024566 | orchestrator | 2026-04-05 04:23:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:08.027199 | orchestrator | 2026-04-05 04:23:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:08.027272 | orchestrator | 2026-04-05 04:23:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:11.063887 | orchestrator | 2026-04-05 04:23:11 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:11.066377 | orchestrator | 2026-04-05 04:23:11 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:11.066436 | orchestrator | 2026-04-05 04:23:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:14.112047 | orchestrator | 2026-04-05 04:23:14 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:14.113683 | orchestrator | 2026-04-05 04:23:14 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:14.113730 | orchestrator | 2026-04-05 04:23:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:17.157396 | orchestrator | 2026-04-05 04:23:17 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:17.160220 | orchestrator | 2026-04-05 04:23:17 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:17.160291 | orchestrator | 2026-04-05 04:23:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:20.209719 | orchestrator | 2026-04-05 04:23:20 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:20.212135 | orchestrator | 2026-04-05 04:23:20 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:20.212188 | orchestrator | 2026-04-05 04:23:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:23.251464 | orchestrator | 2026-04-05 04:23:23 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:23.253264 | orchestrator | 2026-04-05 04:23:23 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:23.253364 | orchestrator | 2026-04-05 04:23:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:26.298238 | orchestrator | 2026-04-05 04:23:26 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:26.301505 | orchestrator | 2026-04-05 04:23:26 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:23:26.301578 | orchestrator | 2026-04-05 04:23:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:29.348114 | orchestrator | 2026-04-05 04:23:29 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:29.350306 | orchestrator | 2026-04-05 04:23:29 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:29.350368 | orchestrator | 2026-04-05 04:23:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:32.397350 | orchestrator | 2026-04-05 04:23:32 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:32.399231 | orchestrator | 2026-04-05 04:23:32 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:32.399305 | orchestrator | 2026-04-05 04:23:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:35.448622 | orchestrator | 2026-04-05 04:23:35 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:35.450256 | orchestrator | 2026-04-05 04:23:35 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:35.450295 | orchestrator | 2026-04-05 04:23:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:38.494284 | orchestrator | 2026-04-05 04:23:38 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:38.495546 | orchestrator | 2026-04-05 04:23:38 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:38.495575 | orchestrator | 2026-04-05 04:23:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:41.538577 | orchestrator | 2026-04-05 04:23:41 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:41.539271 | orchestrator | 2026-04-05 04:23:41 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:41.539335 | orchestrator | 2026-04-05 04:23:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:23:44.579954 | orchestrator | 2026-04-05 04:23:44 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:44.581254 | orchestrator | 2026-04-05 04:23:44 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:44.581306 | orchestrator | 2026-04-05 04:23:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:47.632311 | orchestrator | 2026-04-05 04:23:47 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:47.633529 | orchestrator | 2026-04-05 04:23:47 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:47.633666 | orchestrator | 2026-04-05 04:23:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:50.681064 | orchestrator | 2026-04-05 04:23:50 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:50.681418 | orchestrator | 2026-04-05 04:23:50 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:50.681436 | orchestrator | 2026-04-05 04:23:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:53.731633 | orchestrator | 2026-04-05 04:23:53 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:53.732945 | orchestrator | 2026-04-05 04:23:53 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:53.733002 | orchestrator | 2026-04-05 04:23:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:56.778373 | orchestrator | 2026-04-05 04:23:56 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:56.779268 | orchestrator | 2026-04-05 04:23:56 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:56.779349 | orchestrator | 2026-04-05 04:23:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:23:59.828211 | orchestrator | 2026-04-05 
04:23:59 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:23:59.830182 | orchestrator | 2026-04-05 04:23:59 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:23:59.830301 | orchestrator | 2026-04-05 04:23:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:02.885401 | orchestrator | 2026-04-05 04:24:02 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:02.886643 | orchestrator | 2026-04-05 04:24:02 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:02.886726 | orchestrator | 2026-04-05 04:24:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:05.940922 | orchestrator | 2026-04-05 04:24:05 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:05.942413 | orchestrator | 2026-04-05 04:24:05 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:05.942492 | orchestrator | 2026-04-05 04:24:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:08.995095 | orchestrator | 2026-04-05 04:24:08 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:08.997097 | orchestrator | 2026-04-05 04:24:08 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:08.997428 | orchestrator | 2026-04-05 04:24:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:12.048340 | orchestrator | 2026-04-05 04:24:12 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:12.051183 | orchestrator | 2026-04-05 04:24:12 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:12.051245 | orchestrator | 2026-04-05 04:24:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:15.097014 | orchestrator | 2026-04-05 04:24:15 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED 2026-04-05 04:24:15.099087 | orchestrator | 2026-04-05 04:24:15 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:15.099179 | orchestrator | 2026-04-05 04:24:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:18.147132 | orchestrator | 2026-04-05 04:24:18 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:18.149051 | orchestrator | 2026-04-05 04:24:18 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:18.149605 | orchestrator | 2026-04-05 04:24:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:21.200738 | orchestrator | 2026-04-05 04:24:21 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:21.202145 | orchestrator | 2026-04-05 04:24:21 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:21.202185 | orchestrator | 2026-04-05 04:24:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:24.247111 | orchestrator | 2026-04-05 04:24:24 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:24.248232 | orchestrator | 2026-04-05 04:24:24 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:24.248294 | orchestrator | 2026-04-05 04:24:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:27.289819 | orchestrator | 2026-04-05 04:24:27 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:27.291900 | orchestrator | 2026-04-05 04:24:27 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:27.291942 | orchestrator | 2026-04-05 04:24:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:30.342801 | orchestrator | 2026-04-05 04:24:30 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:30.343678 | orchestrator | 2026-04-05 04:24:30 | INFO  
| Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:30.343706 | orchestrator | 2026-04-05 04:24:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:33.392030 | orchestrator | 2026-04-05 04:24:33 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:33.393192 | orchestrator | 2026-04-05 04:24:33 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:33.393257 | orchestrator | 2026-04-05 04:24:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:36.436610 | orchestrator | 2026-04-05 04:24:36 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:36.439201 | orchestrator | 2026-04-05 04:24:36 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:36.439361 | orchestrator | 2026-04-05 04:24:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:39.493767 | orchestrator | 2026-04-05 04:24:39 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:39.495746 | orchestrator | 2026-04-05 04:24:39 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:39.495810 | orchestrator | 2026-04-05 04:24:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:42.544279 | orchestrator | 2026-04-05 04:24:42 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:42.546355 | orchestrator | 2026-04-05 04:24:42 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:42.546421 | orchestrator | 2026-04-05 04:24:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:45.598999 | orchestrator | 2026-04-05 04:24:45 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:45.601631 | orchestrator | 2026-04-05 04:24:45 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 
04:24:45.601662 | orchestrator | 2026-04-05 04:24:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:48.651512 | orchestrator | 2026-04-05 04:24:48 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:48.651635 | orchestrator | 2026-04-05 04:24:48 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:48.651646 | orchestrator | 2026-04-05 04:24:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:51.705751 | orchestrator | 2026-04-05 04:24:51 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:51.707208 | orchestrator | 2026-04-05 04:24:51 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:51.707373 | orchestrator | 2026-04-05 04:24:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:54.753623 | orchestrator | 2026-04-05 04:24:54 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:54.754460 | orchestrator | 2026-04-05 04:24:54 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:54.754497 | orchestrator | 2026-04-05 04:24:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:24:57.799621 | orchestrator | 2026-04-05 04:24:57 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:24:57.802358 | orchestrator | 2026-04-05 04:24:57 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:24:57.802431 | orchestrator | 2026-04-05 04:24:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:00.845019 | orchestrator | 2026-04-05 04:25:00 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:00.846543 | orchestrator | 2026-04-05 04:25:00 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:00.846598 | orchestrator | 2026-04-05 04:25:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-05 04:25:03.902451 | orchestrator | 2026-04-05 04:25:03 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:03.905626 | orchestrator | 2026-04-05 04:25:03 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:03.905708 | orchestrator | 2026-04-05 04:25:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:06.956247 | orchestrator | 2026-04-05 04:25:06 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:06.958936 | orchestrator | 2026-04-05 04:25:06 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:06.959023 | orchestrator | 2026-04-05 04:25:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:10.011217 | orchestrator | 2026-04-05 04:25:10 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:10.013410 | orchestrator | 2026-04-05 04:25:10 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:10.013498 | orchestrator | 2026-04-05 04:25:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:13.058791 | orchestrator | 2026-04-05 04:25:13 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:13.060108 | orchestrator | 2026-04-05 04:25:13 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:13.060161 | orchestrator | 2026-04-05 04:25:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:16.104703 | orchestrator | 2026-04-05 04:25:16 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:16.106360 | orchestrator | 2026-04-05 04:25:16 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:16.106438 | orchestrator | 2026-04-05 04:25:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:19.157805 | orchestrator | 2026-04-05 
04:25:19 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:19.159266 | orchestrator | 2026-04-05 04:25:19 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:19.159753 | orchestrator | 2026-04-05 04:25:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:22.199720 | orchestrator | 2026-04-05 04:25:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:22.200751 | orchestrator | 2026-04-05 04:25:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:22.200803 | orchestrator | 2026-04-05 04:25:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:25.250192 | orchestrator | 2026-04-05 04:25:25 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:25.251919 | orchestrator | 2026-04-05 04:25:25 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:25.251984 | orchestrator | 2026-04-05 04:25:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:28.302744 | orchestrator | 2026-04-05 04:25:28 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:28.305777 | orchestrator | 2026-04-05 04:25:28 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:28.305877 | orchestrator | 2026-04-05 04:25:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:31.357478 | orchestrator | 2026-04-05 04:25:31 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED 2026-04-05 04:25:31.358141 | orchestrator | 2026-04-05 04:25:31 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED 2026-04-05 04:25:31.358173 | orchestrator | 2026-04-05 04:25:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 04:25:34.411868 | orchestrator | 2026-04-05 04:25:34 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state 
STARTED
2026-04-05 04:25:34.412502 | orchestrator | 2026-04-05 04:25:34 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 04:25:34.412539 | orchestrator | 2026-04-05 04:25:34 | INFO  | Wait 1 second(s) until the next check
2026-04-05 04:25:37.450735 | orchestrator | 2026-04-05 04:25:37 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 04:25:37.452798 | orchestrator | 2026-04-05 04:25:37 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 04:25:37.452891 | orchestrator | 2026-04-05 04:25:37 | INFO  | Wait 1 second(s) until the next check
2026-04-05 04:30:22.235525 | orchestrator | 2026-04-05 04:30:22 | INFO  | Task 4b2f39f3-48fc-4b60-b795-ddad107a749f is in state STARTED
2026-04-05 04:30:22.236620 | orchestrator | 2026-04-05 04:30:22 | INFO  | Task 470acebf-b2f0-4009-9f38-3f43b0aca299 is in state STARTED
2026-04-05 04:30:22.236676 | orchestrator | 2026-04-05 04:30:22 | INFO  | Wait 1 second(s) until the next check
2026-04-05 04:30:25.098060 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-05 04:30:25.101478 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-05 04:30:25.866747 |
2026-04-05 04:30:25.866971 | PLAY [Post output play]
2026-04-05 04:30:25.883355 |
2026-04-05 04:30:25.883504 | LOOP [stage-output : Register sources]
2026-04-05 04:30:25.956585 |
2026-04-05 04:30:25.957056 | TASK [stage-output : Check sudo]
2026-04-05 04:30:26.863419 | orchestrator | sudo: a password is required 2026-04-05
04:30:27.004280 | orchestrator | ok: Runtime: 0:00:00.017795
2026-04-05 04:30:27.018101 |
2026-04-05 04:30:27.018260 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-05 04:30:27.061677 |
2026-04-05 04:30:27.061984 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-05 04:30:27.131110 | orchestrator | ok
2026-04-05 04:30:27.139981 |
2026-04-05 04:30:27.140126 | LOOP [stage-output : Ensure target folders exist]
2026-04-05 04:30:27.594564 | orchestrator | ok: "docs"
2026-04-05 04:30:27.594998 |
2026-04-05 04:30:27.867784 | orchestrator | ok: "artifacts"
2026-04-05 04:30:28.151153 | orchestrator | ok: "logs"
2026-04-05 04:30:28.165829 |
2026-04-05 04:30:28.165988 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-05 04:30:28.198050 |
2026-04-05 04:30:28.198261 | TASK [stage-output : Make all log files readable]
2026-04-05 04:30:28.507193 | orchestrator | ok
2026-04-05 04:30:28.516728 |
2026-04-05 04:30:28.516864 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-05 04:30:28.552159 | orchestrator | skipping: Conditional result was False
2026-04-05 04:30:28.570133 |
2026-04-05 04:30:28.570307 | TASK [stage-output : Discover log files for compression]
2026-04-05 04:30:28.596155 | orchestrator | skipping: Conditional result was False
2026-04-05 04:30:28.612539 |
2026-04-05 04:30:28.612709 | LOOP [stage-output : Archive everything from logs]
2026-04-05 04:30:28.657979 |
2026-04-05 04:30:28.658167 | PLAY [Post cleanup play]
2026-04-05 04:30:28.666716 |
2026-04-05 04:30:28.666872 | TASK [Set cloud fact (Zuul deployment)]
2026-04-05 04:30:28.725714 | orchestrator | ok
2026-04-05 04:30:28.738471 |
2026-04-05 04:30:28.738612 | TASK [Set cloud fact (local deployment)]
2026-04-05 04:30:28.773533 | orchestrator | skipping: Conditional result was False
2026-04-05 04:30:28.789943 |
2026-04-05 04:30:28.790110 | TASK [Clean the cloud environment]
2026-04-05 04:30:29.448654 | orchestrator | 2026-04-05 04:30:29 - clean up servers
2026-04-05 04:30:30.333330 | orchestrator | 2026-04-05 04:30:30 - testbed-manager
2026-04-05 04:30:30.423189 | orchestrator | 2026-04-05 04:30:30 - testbed-node-0
2026-04-05 04:30:30.509363 | orchestrator | 2026-04-05 04:30:30 - testbed-node-5
2026-04-05 04:30:30.606605 | orchestrator | 2026-04-05 04:30:30 - testbed-node-1
2026-04-05 04:30:30.698174 | orchestrator | 2026-04-05 04:30:30 - testbed-node-4
2026-04-05 04:30:30.790843 | orchestrator | 2026-04-05 04:30:30 - testbed-node-3
2026-04-05 04:30:30.895426 | orchestrator | 2026-04-05 04:30:30 - testbed-node-2
2026-04-05 04:30:30.985914 | orchestrator | 2026-04-05 04:30:30 - clean up keypairs
2026-04-05 04:30:31.007911 | orchestrator | 2026-04-05 04:30:31 - testbed
2026-04-05 04:30:31.035418 | orchestrator | 2026-04-05 04:30:31 - wait for servers to be gone
2026-04-05 04:30:41.889934 | orchestrator | 2026-04-05 04:30:41 - clean up ports
2026-04-05 04:30:42.135089 | orchestrator | 2026-04-05 04:30:42 - 3db35acb-37d4-4ce2-a2c9-9c46e8e6499d
2026-04-05 04:30:42.497741 | orchestrator | 2026-04-05 04:30:42 - 3f9e16ba-e2aa-4c3c-b5d8-78368c1ad9bb
2026-04-05 04:30:42.759854 | orchestrator | 2026-04-05 04:30:42 - 4b15b753-0114-424d-811d-7e8fe9301539
2026-04-05 04:30:43.233240 | orchestrator | 2026-04-05 04:30:43 - 5cccf19d-e496-4c6a-99fd-879f354d82eb
2026-04-05 04:30:43.549215 | orchestrator | 2026-04-05 04:30:43 - 698e800f-43ba-46f2-b732-d4eb9ba60aef
2026-04-05 04:30:43.870974 | orchestrator | 2026-04-05 04:30:43 - 9131712c-f2a9-4ecb-8ba3-ba9a62e20d32
2026-04-05 04:30:44.134714 | orchestrator | 2026-04-05 04:30:44 - bfdbeb62-5294-4214-a5a5-ac08b5cc22ba
2026-04-05 04:30:44.395866 | orchestrator | 2026-04-05 04:30:44 - clean up volumes
2026-04-05 04:30:44.527790 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-1-node-base
2026-04-05 04:30:44.569466 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-4-node-base
2026-04-05 04:30:44.620531 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-2-node-base
2026-04-05 04:30:44.665532 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-manager-base
2026-04-05 04:30:44.715610 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-5-node-base
2026-04-05 04:30:44.764039 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-3-node-base
2026-04-05 04:30:44.806286 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-0-node-base
2026-04-05 04:30:44.855719 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-2-node-5
2026-04-05 04:30:44.897830 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-1-node-4
2026-04-05 04:30:44.937907 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-4-node-4
2026-04-05 04:30:44.986248 | orchestrator | 2026-04-05 04:30:44 - testbed-volume-8-node-5
2026-04-05 04:30:45.032003 | orchestrator | 2026-04-05 04:30:45 - testbed-volume-0-node-3
2026-04-05 04:30:45.072602 | orchestrator | 2026-04-05 04:30:45 - testbed-volume-7-node-4
2026-04-05 04:30:45.123603 | orchestrator | 2026-04-05 04:30:45 - testbed-volume-3-node-3
2026-04-05 04:30:45.172817 | orchestrator | 2026-04-05 04:30:45 - testbed-volume-5-node-5
2026-04-05 04:30:45.224259 | orchestrator | 2026-04-05 04:30:45 - testbed-volume-6-node-3
2026-04-05 04:30:45.272395 | orchestrator | 2026-04-05 04:30:45 - disconnect routers
2026-04-05 04:30:45.410520 | orchestrator | 2026-04-05 04:30:45 - testbed
2026-04-05 04:30:46.963140 | orchestrator | 2026-04-05 04:30:46 - clean up subnets
2026-04-05 04:30:47.023245 | orchestrator | 2026-04-05 04:30:47 - subnet-testbed-management
2026-04-05 04:30:47.221925 | orchestrator | 2026-04-05 04:30:47 - clean up networks
2026-04-05 04:30:47.413892 | orchestrator | 2026-04-05 04:30:47 - net-testbed-management
2026-04-05 04:30:47.759810 | orchestrator | 2026-04-05 04:30:47 - clean up security groups
2026-04-05 04:30:47.808799 | orchestrator | 2026-04-05 04:30:47 - testbed-node
2026-04-05 04:30:47.940780 | orchestrator | 2026-04-05 04:30:47 - testbed-management
2026-04-05 04:30:48.071799 | orchestrator | 2026-04-05 04:30:48 - clean up floating ips
2026-04-05 04:30:48.112255 | orchestrator | 2026-04-05 04:30:48 - 81.163.192.32
2026-04-05 04:30:48.532021 | orchestrator | 2026-04-05 04:30:48 - clean up routers
2026-04-05 04:30:48.664441 | orchestrator | 2026-04-05 04:30:48 - testbed
2026-04-05 04:30:50.351077 | orchestrator | ok: Runtime: 0:00:21.022896
2026-04-05 04:30:50.354573 |
2026-04-05 04:30:50.354702 | PLAY RECAP
2026-04-05 04:30:50.354797 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-05 04:30:50.354880 |
2026-04-05 04:30:50.502812 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-05 04:30:50.504001 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-05 04:30:51.319271 |
2026-04-05 04:30:51.319441 | PLAY [Cleanup play]
2026-04-05 04:30:51.340812 |
2026-04-05 04:30:51.341037 | TASK [Set cloud fact (Zuul deployment)]
2026-04-05 04:30:51.404267 | orchestrator | ok
2026-04-05 04:30:51.414995 |
2026-04-05 04:30:51.415159 | TASK [Set cloud fact (local deployment)]
2026-04-05 04:30:51.449994 | orchestrator | skipping: Conditional result was False
2026-04-05 04:30:51.462982 |
2026-04-05 04:30:51.463115 | TASK [Clean the cloud environment]
2026-04-05 04:30:52.678739 | orchestrator | 2026-04-05 04:30:52 - clean up servers
2026-04-05 04:30:53.309409 | orchestrator | 2026-04-05 04:30:53 - clean up keypairs
2026-04-05 04:30:53.327523 | orchestrator | 2026-04-05 04:30:53 - wait for servers to be gone
2026-04-05 04:30:53.369078 | orchestrator | 2026-04-05 04:30:53 - clean up ports
2026-04-05 04:30:53.459333 | orchestrator | 2026-04-05 04:30:53 - clean up volumes
2026-04-05 04:30:53.540647 | orchestrator | 2026-04-05 04:30:53 - disconnect routers
2026-04-05 04:30:53.565423 | orchestrator | 2026-04-05 04:30:53 - clean up subnets
2026-04-05 04:30:53.593397 | orchestrator | 2026-04-05 04:30:53 - clean up networks
2026-04-05 04:30:53.790176 | orchestrator | 2026-04-05 04:30:53 - clean up security groups
2026-04-05 04:30:53.830108 | orchestrator | 2026-04-05 04:30:53 - clean up floating ips
2026-04-05 04:30:53.871167 | orchestrator | 2026-04-05 04:30:53 - clean up routers
2026-04-05 04:30:54.031023 | orchestrator | ok: Runtime: 0:00:01.618940
2026-04-05 04:30:54.034296 |
2026-04-05 04:30:54.034434 | PLAY RECAP
2026-04-05 04:30:54.034535 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-05 04:30:54.034602 |
2026-04-05 04:30:54.161302 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-05 04:30:54.163980 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-05 04:30:54.903215 |
2026-04-05 04:30:54.903384 | PLAY [Base post-fetch]
2026-04-05 04:30:54.919267 |
2026-04-05 04:30:54.919418 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-05 04:30:54.975476 | orchestrator | skipping: Conditional result was False
2026-04-05 04:30:54.990090 |
2026-04-05 04:30:54.990297 | TASK [fetch-output : Set log path for single node]
2026-04-05 04:30:55.039232 | orchestrator | ok
2026-04-05 04:30:55.049087 |
2026-04-05 04:30:55.049280 | LOOP [fetch-output : Ensure local output dirs]
2026-04-05 04:30:55.530295 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/b1a84e86d2ef42df8cc88d5fcfa34ba1/work/logs"
2026-04-05 04:30:55.832024 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b1a84e86d2ef42df8cc88d5fcfa34ba1/work/artifacts"
2026-04-05 04:30:56.105966 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b1a84e86d2ef42df8cc88d5fcfa34ba1/work/docs"
2026-04-05 04:30:56.124762 |
2026-04-05 04:30:56.124921 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-05 04:30:57.033088 | orchestrator | changed: .d..t...... ./
2026-04-05 04:30:57.033339 | orchestrator | changed: All items complete
2026-04-05 04:30:57.033386 |
2026-04-05 04:30:57.757024 | orchestrator | changed: .d..t...... ./
2026-04-05 04:30:58.496145 | orchestrator | changed: .d..t...... ./
2026-04-05 04:30:58.521854 |
2026-04-05 04:30:58.522045 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-05 04:30:58.559821 | orchestrator | skipping: Conditional result was False
2026-04-05 04:30:58.570654 | orchestrator | skipping: Conditional result was False
2026-04-05 04:30:58.591775 |
2026-04-05 04:30:58.591893 | PLAY RECAP
2026-04-05 04:30:58.591994 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-05 04:30:58.592038 |
2026-04-05 04:30:58.725747 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-05 04:30:58.728294 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-05 04:30:59.501702 |
2026-04-05 04:30:59.501866 | PLAY [Base post]
2026-04-05 04:30:59.516532 |
2026-04-05 04:30:59.516669 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-05 04:31:00.529532 | orchestrator | changed
2026-04-05 04:31:00.540183 |
2026-04-05 04:31:00.540324 | PLAY RECAP
2026-04-05 04:31:00.540406 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-05 04:31:00.540492 |
2026-04-05 04:31:00.669205 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-05 04:31:00.670274 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-05 04:31:01.471000 |
2026-04-05 04:31:01.471184 | PLAY [Base post-logs]
2026-04-05 04:31:01.482222 |
2026-04-05 04:31:01.482361 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-05 04:31:01.968085 | localhost | changed
2026-04-05 04:31:01.988855 |
2026-04-05 04:31:01.989082 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-05 04:31:02.033079 | localhost | ok
2026-04-05 04:31:02.041231 |
2026-04-05 04:31:02.041439 | TASK [Set zuul-log-path fact]
2026-04-05 04:31:02.060229 | localhost | ok
2026-04-05 04:31:02.077058 |
2026-04-05 04:31:02.077217 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-05 04:31:02.105404 | localhost | ok
2026-04-05 04:31:02.108736 |
2026-04-05 04:31:02.108845 | TASK [upload-logs : Create log directories]
2026-04-05 04:31:02.718026 | localhost | changed
2026-04-05 04:31:02.722449 |
2026-04-05 04:31:02.722603 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-05 04:31:03.224295 | localhost -> localhost | ok: Runtime: 0:00:00.010698
2026-04-05 04:31:03.228487 |
2026-04-05 04:31:03.228604 | TASK [upload-logs : Upload logs to log server]
2026-04-05 04:31:03.818284 | localhost | Output suppressed because no_log was given
2026-04-05 04:31:03.821413 |
2026-04-05 04:31:03.821578 | LOOP [upload-logs : Compress console log and json output]
2026-04-05 04:31:03.880498 | localhost | skipping: Conditional result was False
2026-04-05 04:31:03.886368 | localhost | skipping: Conditional result was False
2026-04-05 04:31:03.897560 |
2026-04-05 04:31:03.897755 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-05 04:31:03.964994 | localhost | skipping: Conditional result was False
2026-04-05 04:31:03.965467 |
2026-04-05 04:31:03.972361 | localhost | skipping: Conditional result was False
2026-04-05 04:31:03.983049 |
2026-04-05 04:31:03.983281 | LOOP [upload-logs : Upload console log and json output]